Columns: id, url, title, text, topic, section, sublist
21496
https://en.wikipedia.org/wiki/Nucleic%20acid
Nucleic acid
Nucleic acids are large biomolecules that are crucial in all cells and viruses. They are composed of nucleotides, which are the monomer components: a 5-carbon sugar, a phosphate group and a nitrogenous base. The two main classes of nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). If the sugar is ribose, the polymer is RNA; if the sugar is deoxyribose, a variant of ribose, the polymer is DNA. Nucleic acids are chemical compounds that are found in nature. They carry information in cells and make up genetic material. These acids are very common in all living things, where they create, encode, and store information in every living cell of every life-form on Earth. In turn, they send and express that information inside and outside the cell nucleus. From the inner workings of the cell to the young of a living thing, they contain and provide information via the nucleic acid sequence. This gives the RNA and DNA their unmistakable 'ladder-step' order of nucleotides within their molecules. Both play a crucial role in directing protein synthesis. Strings of nucleotides are bonded to form spiraling backbones and assembled into chains of bases or base-pairs selected from the five primary, or canonical, nucleobases. RNA usually forms a chain of single bases, whereas DNA forms a chain of base pairs. The bases found in RNA and DNA are: adenine, cytosine, guanine, thymine, and uracil. Thymine occurs only in DNA and uracil only in RNA. Using amino acids and protein synthesis, the specific sequence in DNA of these nucleobase-pairs helps to keep and send coded instructions as genes. In RNA, base-pair sequencing helps to make new proteins that determine most chemical processes of all life forms. History Nucleic acid was first discovered, in part, by Friedrich Miescher in 1869 at the University of Tübingen, Germany. He discovered a new substance, which he called nuclein and which - depending on how his results are interpreted in detail - can be seen in modern terms either as a nucleic acid-histone complex or as the actual nucleic acid. In the early 1880s, Albrecht Kossel further purified the nucleic acid substance and discovered its highly acidic properties. He later also identified the nucleobases. Phoebus Levene subsequently determined the basic structure of nucleic acids. In 1889 Richard Altmann created the term nucleic acid – at that time DNA and RNA were not differentiated. In 1938 Astbury and Bell published the first X-ray diffraction pattern of DNA. In 1944 the Avery–MacLeod–McCarty experiment showed that DNA is the carrier of genetic information, and in 1953 Watson and Crick proposed the double-helix structure of DNA. Experimental studies of nucleic acids constitute a major part of modern biological and medical research, and form a foundation for genome and forensic science, and the biotechnology and pharmaceutical industries. Occurrence and nomenclature The term nucleic acid is the overall name for DNA and RNA, members of a family of biopolymers, and is a type of polynucleotide. Nucleic acids were named for their initial discovery within the nucleus, and for the presence of phosphate groups (related to phosphoric acid). Although first discovered within the nucleus of eukaryotic cells, nucleic acids are now known to be found in all life forms including within bacteria, archaea, mitochondria, chloroplasts, and viruses (there is debate as to whether viruses are living or non-living). 
All living cells contain both DNA and RNA (except some cells such as mature red blood cells), while viruses contain either DNA or RNA, but usually not both. The basic component of biological nucleic acids is the nucleotide, each of which contains a pentose sugar (ribose or deoxyribose), a phosphate group, and a nucleobase. Nucleic acids are also generated within the laboratory, through the use of enzymes (DNA and RNA polymerases) and by solid-phase chemical synthesis. Molecular composition and size Nucleic acids are generally very large molecules. Indeed, DNA molecules are probably the largest individual molecules known. Well-studied biological nucleic acid molecules range in size from 21 nucleotides (small interfering RNA) to large chromosomes (human chromosome 1 is a single molecule that contains 247 million base pairs). In most cases, naturally occurring DNA molecules are double-stranded and RNA molecules are single-stranded. There are numerous exceptions, however—some viruses have genomes made of double-stranded RNA and other viruses have single-stranded DNA genomes, and, in some circumstances, nucleic acid structures with three or four strands can form. Nucleic acids are linear polymers (chains) of nucleotides. Each nucleotide consists of three components: a purine or pyrimidine nucleobase (sometimes termed nitrogenous base or simply base), a pentose sugar, and a phosphate group which makes the molecule acidic. The substructure consisting of a nucleobase plus sugar is termed a nucleoside. Nucleic acid types differ in the structure of the sugar in their nucleotides–DNA contains 2'-deoxyribose while RNA contains ribose (where the only difference is the presence of a hydroxyl group). Also, the nucleobases found in the two nucleic acid types are different: adenine, cytosine, and guanine are found in both RNA and DNA, while thymine occurs in DNA and uracil occurs in RNA. The sugars and phosphates in nucleic acids are connected to each other in an alternating chain (sugar-phosphate backbone) through phosphodiester linkages. In conventional nomenclature, the carbons to which the phosphate groups attach are the 3'-end and the 5'-end carbons of the sugar. This gives nucleic acids directionality, and the ends of nucleic acid molecules are referred to as 5'-end and 3'-end. The nucleobases are joined to the sugars via an N-glycosidic linkage involving a nucleobase ring nitrogen (N-1 for pyrimidines and N-9 for purines) and the 1' carbon of the pentose sugar ring. Non-standard nucleosides are also found in both RNA and DNA and usually arise from modification of the standard nucleosides within the DNA molecule or the primary (initial) RNA transcript. Transfer RNA (tRNA) molecules contain a particularly large number of modified nucleosides. Topology Double-stranded nucleic acids are made up of complementary sequences, in which extensive Watson-Crick base pairing results in a highly repeated and quite uniform nucleic acid double-helical three-dimensional structure. In contrast, single-stranded RNA and DNA molecules are not constrained to a regular double helix, and can adopt highly complex three-dimensional structures that are based on short stretches of intramolecular base-paired sequences including both Watson-Crick and noncanonical base pairs, and a wide range of complex tertiary interactions. Nucleic acid molecules are usually unbranched and may occur as linear and circular molecules. 
For example, bacterial chromosomes, plasmids, mitochondrial DNA, and chloroplast DNA are usually circular double-stranded DNA molecules, while chromosomes of the eukaryotic nucleus are usually linear double-stranded DNA molecules. Most RNA molecules are linear, single-stranded molecules, but both circular and branched molecules can result from RNA splicing reactions. The total amount of pyrimidines in a double-stranded DNA molecule is equal to the total amount of purines. The diameter of the helix is about 20 Å. Sequences One DNA or RNA molecule differs from another primarily in the sequence of nucleotides. Nucleotide sequences are of great importance in biology since they carry the ultimate instructions that encode all biological molecules, molecular assemblies, subcellular and cellular structures, organs, and organisms, and directly enable cognition, memory, and behavior. Enormous efforts have gone into the development of experimental methods to determine the nucleotide sequence of biological DNA and RNA molecules, and today hundreds of millions of nucleotides are sequenced daily at genome centers and smaller laboratories worldwide. In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. Types Deoxyribonucleic acid Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The chemical DNA was discovered in 1869, but its role in genetic inheritance was not demonstrated until 1943. The DNA segments that carry this genetic information are called genes. Other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life. DNA consists of two long polymers of monomer units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands are oriented in opposite directions to each other and are, therefore, antiparallel. Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. This information specifies the sequence of the amino acids within proteins according to the genetic code. The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription. Within cells, DNA is organized into long sequences called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. Ribonucleic acid Ribonucleic acid (RNA) functions in converting genetic information from genes into the amino acid sequences of proteins. 
The three universal types of RNA are transfer RNA (tRNA), messenger RNA (mRNA), and ribosomal RNA (rRNA). Messenger RNA carries genetic sequence information from DNA in the nucleus to the ribosome, where it directs protein synthesis. Ribosomal RNA is a major structural and catalytic component of the ribosome and catalyzes peptide bond formation. Transfer RNA serves as the carrier molecule for amino acids to be used in protein synthesis and is responsible for decoding the mRNA. In addition, many other classes of RNA are now known. Artificial nucleic acid Artificial nucleic acid analogues have been designed and synthesized. They include peptide nucleic acid, morpholino- and locked nucleic acid, glycol nucleic acid, and threose nucleic acid. Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecules.
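As a minimal illustration of the complementarity and 5'-to-3' directionality rules described above, the following Python sketch computes the reverse complement of a DNA strand and a simple transcript of a template strand; the example sequence is a made-up illustration, not one taken from this article.

# Watson-Crick pairing in DNA: A pairs with T, G pairs with C.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    # The complementary strand is antiparallel, so it is read back in the
    # opposite direction to keep the conventional 5'-to-3' orientation.
    return "".join(DNA_COMPLEMENT[base] for base in reversed(strand.upper()))

def transcribe(template: str) -> str:
    # RNA copied from a DNA template matches the coding strand, with uracil
    # taking the place of thymine.
    return reverse_complement(template).replace("T", "U")

coding_strand = "ATGGCGTAC"                       # hypothetical example sequence
template_strand = reverse_complement(coding_strand)
print(template_strand)                            # GTACGCCAT
print(transcribe(template_strand))                # AUGGCGUAC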
Biology and health sciences
Biochemistry and molecular biology
null
21497
https://en.wikipedia.org/wiki/Nitrate
Nitrate
Nitrate is a polyatomic ion with the chemical formula NO3−. Salts containing this ion are called nitrates. Nitrates are common components of fertilizers and explosives. Almost all inorganic nitrates are soluble in water. An example of an insoluble nitrate is bismuth oxynitrate. Chemical structure The nitrate anion is the conjugate base of nitric acid, consisting of one central nitrogen atom surrounded by three identically bonded oxygen atoms in a trigonal planar arrangement. The nitrate ion carries a formal charge of −1. This charge results from a combination of formal charges in which each of the three oxygens carries, on average, a −2/3 charge, whereas the nitrogen carries a +1 charge, all of these adding up to the −1 formal charge of the polyatomic nitrate ion. This arrangement is commonly used as an example of resonance. Like the isoelectronic carbonate ion, the nitrate ion can be represented by three resonance structures: Chemical and biochemical properties In the anion, the oxidation state of the central nitrogen atom is V (+5). This corresponds to the highest possible oxidation number of nitrogen. Nitrate is a potentially powerful oxidizer, as evidenced by its explosive behaviour at high temperature when it is detonated as ammonium nitrate (NH4NO3) or black powder, ignited by the shock wave of a primary explosive. However, in contrast to red fuming nitric acid or concentrated nitric acid (HNO3), nitrate dissolved in aqueous solution at neutral or high pH is only a weak oxidizing agent and is stable under sterile, or aseptic, conditions, in the absence of microorganisms. To increase its oxidation power, acidic conditions and high concentrations are needed, under which nitrate transforms into nitric acid. This behaviour is consistent with the general theory of reduction-oxidation (redox) in electrochemistry: oxidizing power is enhanced under acidic conditions while the power of reducing agents is reinforced under basic conditions. This can be illustrated by means of a Pourbaix diagram (Eh–pH diagram) drawn using the Nernst equation and the corresponding redox reactions. During the reduction of oxidizers, the oxidation state decreases and the oxide ions (O2−) released in excess into the water by the reaction are more easily protonated under acidic conditions, which drives the reduction reaction to the right according to Le Chatelier's principle. For the oxidation of reducing agents, the reverse occurs: as the oxidation state increases, oxide anions are needed to neutralise the surplus of positive charges borne by the central atom. As basic conditions favor the production of oxide anions (O2−), this drives the chemical equilibrium of the oxidation reaction to the right. Meanwhile, nitrate is used as a powerful terminal electron acceptor by denitrifying bacteria to deliver the energy they need to thrive. Under anaerobic conditions, nitrate is the strongest electron acceptor used by prokaryote microorganisms (bacteria and archaea) to respire. The redox couple NO3−/NO2− is at the top of the redox scale for anaerobic respiration, just below the oxygen couple (O2/H2O), but above the Mn(IV)/Mn(II), Fe(III)/Fe(II), sulfate and carbon dioxide couples. In natural waters, inevitably contaminated by microorganisms, nitrate is a quite unstable and labile dissolved chemical species because it is metabolised by denitrifying bacteria. Water samples for nitrate/nitrite analyses need to be kept at 4 °C in a refrigerated room and analysed as quickly as possible to limit the loss of nitrate. 
In the first step of the denitrification process, dissolved nitrate (NO3−) is catalytically reduced into nitrite (NO2−) by the enzymatic activity of bacteria. In aqueous solution, dissolved nitrite, N(III), is a more powerful oxidizer than nitrate, N(V), because it has to accept fewer electrons and its reduction is less kinetically hindered than that of nitrate. During the biological denitrification process, further nitrite reduction also gives rise to another powerful oxidizing agent: nitric oxide (NO). NO can bind to myoglobin, accentuating its red coloration. NO is an important biological signaling molecule and intervenes in the vasodilation process, but it can also produce free radicals in biological tissues, accelerating their degradation and aging process. The reactive oxygen species (ROS) generated by NO contribute to oxidative stress, a condition involved in vascular dysfunction and atherogenesis. Detection in chemical analysis The nitrate anion is commonly analysed in water by ion chromatography (IC) along with other anions also present in solution. The main advantage of IC is its ease and the simultaneous analysis of all the anions present in the aqueous sample. Other methods for the specific detection of nitrate rely on its conversion to nitrite followed by nitrite-specific tests. The reduction of nitrate to nitrite is effected by a copper-cadmium material. The sample is introduced in a flow injection analyzer, and the resulting nitrite-containing effluent is then combined with a reagent for colorimetric or electrochemical detection. The most popular of these assays is the Griess test, whereby nitrite is converted to a deeply colored azo dye suited for UV-vis spectroscopic analysis. The method exploits the reactivity of nitrous acid derived from acidification of nitrite. Nitrous acid selectively reacts with aromatic amines to give diazonium salts, which in turn couple with a second reagent to give the azo dye. The detection limit is 0.02 to 2 μM. Such methods have been highly adapted to biological samples. Occurrence and production Nitrate salts are found naturally on earth in arid environments as large deposits, particularly of nitratine, a major source of sodium nitrate. Nitrates are produced by a number of species of nitrifying bacteria in the natural environment using ammonia or urea as a source of nitrogen and a source of free energy. Nitrate compounds for gunpowder were historically produced, in the absence of mineral nitrate sources, by means of various fermentation processes using urine and dung. Lightning strikes in earth's nitrogen- and oxygen-rich atmosphere produce a mixture of oxides of nitrogen, which form nitrite and nitrate ions that are washed from the atmosphere by rain or in occult deposition. Nitrates are produced industrially from nitric acid. Uses Agriculture Nitrate serves as a primary form of nitrogen for many plants. This essential nutrient is used by plants to synthesize proteins, nucleic acids, and other vital organic molecules. The transformation of atmospheric nitrogen into nitrate is facilitated by certain bacteria and lightning in the nitrogen cycle, which exemplifies nature's ability to convert a relatively inert molecule into a form that is crucial for biological productivity. Nitrates are used as fertilizers in agriculture because of their high solubility and biodegradability. The main nitrate fertilizers are ammonium, sodium, potassium, calcium, and magnesium salts. 
Several billion kilograms are produced annually for this purpose. The significance of nitrate extends beyond its role as a nutrient, since it acts as a signaling molecule in plants, regulating processes such as root growth, flowering, and leaf development. While nitrate is beneficial for agriculture since it enhances soil fertility and crop yields, its excessive use can lead to nutrient runoff, water pollution, and the proliferation of aquatic dead zones. Therefore, sustainable agricultural practices that balance productivity with environmental stewardship are necessary. Nitrate's importance in ecosystems is evident since it supports the growth and development of plants, contributing to biodiversity and ecological balance. Firearms Nitrates are used as oxidizing agents, most notably in explosives, where the rapid oxidation of carbon compounds liberates large volumes of gases (see gunpowder as an example). Industrial Sodium nitrate is used to remove air bubbles from molten glass and some ceramics. Mixtures of molten salts are used to harden the surface of some metals. Photographic film Nitrate was also used as a film stock, in the form of nitrocellulose. Due to its high combustibility, film studios switched to cellulose acetate safety film around 1950. Medicinal and pharmaceutical use In the medical field, nitrate-derived organic esters, such as glyceryl trinitrate, isosorbide dinitrate, and isosorbide mononitrate, are used in the prophylaxis and management of acute coronary syndrome, myocardial infarction, and acute pulmonary oedema. This class of drugs, to which amyl nitrite also belongs, is known as the nitrovasodilators. Toxicity and safety The two areas of concern about the toxicity of nitrate are the following: nitrate reduced by the microbial activity of nitrate-reducing bacteria is the precursor of nitrite in water and in the lower gastrointestinal tract, and nitrite is a precursor to carcinogenic nitrosamines; and, via the formation of nitrite, nitrate is implicated in methemoglobinemia, a disorder of hemoglobin in red blood cells that especially affects infants and toddlers. Methemoglobinemia One of the most common causes of methemoglobinemia in infants is the ingestion of nitrates and nitrites through well water or foods. In fact, nitrates (NO3−), often present at too high a concentration in drinking water, are only the precursor chemical species of nitrites (NO2−), the real culprits of methemoglobinemia. Nitrites produced by the microbial reduction of nitrate (directly in the drinking water, or after ingestion by the infant, in the digestive system) are more powerful oxidizers than nitrates and are the chemical agent really responsible for the oxidation of Fe2+ into Fe3+ in the tetrapyrrole heme of hemoglobin. Indeed, nitrate anions are too weak oxidizers in aqueous solution to be able to directly, or at least sufficiently rapidly, oxidize Fe2+ into Fe3+, because of kinetic limitations. Infants younger than 4 months are at greater risk given that they drink more water per body weight, they have a lower NADH-cytochrome b5 reductase activity, and they have a higher level of fetal hemoglobin, which converts more easily to methemoglobin. Additionally, infants are at an increased risk after an episode of gastroenteritis due to the production of nitrites by bacteria. However, causes other than nitrates can also affect infants and pregnant women. 
Indeed, blue baby syndrome can also be caused by a number of other factors, such as cyanotic heart disease (a congenital heart defect resulting in low levels of oxygen in the blood), or by gastric upset such as diarrheal infection, protein intolerance, or heavy metal toxicity. Drinking water standards Through the Safe Drinking Water Act, the United States Environmental Protection Agency has set a maximum contaminant level of 10 mg/L, or 10 ppm, of nitrate in drinking water. An acceptable daily intake (ADI) for nitrate ions was established in the range of 0–3.7 mg per kg of body weight per day by the Joint FAO/WHO Expert Committee on Food Additives (JECFA). Aquatic toxicity In freshwater or estuarine systems close to land, nitrate can reach concentrations that are lethal to fish. While nitrate is much less toxic than ammonia, levels over 30 ppm of nitrate can inhibit growth, impair the immune system and cause stress in some aquatic species. Nitrate toxicity remains a subject of debate. In most cases of excess nitrate concentrations in aquatic systems, the primary sources are wastewater discharges, as well as surface runoff from agricultural or landscaped areas that have received excess nitrate fertilizer. The resulting eutrophication and algae blooms result in anoxia and dead zones. As a consequence, since nitrate forms a component of total dissolved solids, it is widely used as an indicator of water quality. Human impacts on ecosystems through nitrate deposition Nitrate deposition into ecosystems has markedly increased due to anthropogenic activities, notably from the widespread application of nitrogen-rich fertilizers in agriculture and the emissions from fossil fuel combustion. Annually, about 195 million metric tons of synthetic nitrogen fertilizers are used worldwide, with nitrates constituting a significant portion of this amount. In regions with intensive agriculture, such as parts of the U.S., China, and India, the use of nitrogen fertilizers can exceed 200 kilograms per hectare. The impact of increased nitrate deposition extends beyond plant communities to affect soil microbial populations. The change in soil chemistry and nutrient dynamics can disrupt the natural processes of nitrogen fixation, nitrification, and denitrification, leading to altered microbial community structures and functions. This disruption can further impact nutrient cycling and overall ecosystem health. Dietary nitrate A source of nitrate in the human diet is the consumption of leafy green vegetables, such as spinach and arugula. Nitrate can also be present at high levels in beetroot juice. Drinking water also represents a primary source of nitrate intake. Nitrate ingestion rapidly increases the plasma nitrate concentration by a factor of 2 to 3, and this elevated nitrate concentration can be maintained for more than 2 weeks. Increased plasma nitrate enhances the production of nitric oxide, NO. Nitric oxide is a physiological signaling molecule which intervenes in, among other things, the regulation of muscle blood flow and mitochondrial respiration. Cured meats Nitrite (NO2−) consumption is primarily determined by the amount of processed meats eaten, and the concentration of nitrates (NO3−) added to these meats (bacon, sausages, etc.) for their curing. Although nitrites are the nitrogen species chiefly used in meat curing, nitrates are used as well and can be transformed into nitrite by microorganisms, or in the digestion process, starting with their dissolution in saliva and their contact with the microbiota of the mouth. 
Nitrites lead to the formation of carcinogenic nitrosamines. The production of nitrosamines may be inhibited by the use of the antioxidants vitamin C and the alpha-tocopherol form of vitamin E during curing. Many meat processors claim their meats (e.g. bacon) are "uncured" – which is a marketing claim with no factual basis: there is no such thing as "uncured" bacon (as that would be, essentially, raw sliced pork belly). "Uncured" meat is in fact cured with nitrites, with virtually no distinction in process – the only difference being the USDA labeling requirement between nitrite of vegetable origin (such as from celery) vs. "synthetic" sodium nitrite. An analogy would be purified "sea salt" vs. sodium chloride – both being exactly the same chemical, with the only essential difference being the origin. Anti-hypertensive diets, such as the DASH diet, typically contain high levels of nitrates, which are first reduced to nitrite in the saliva, as detected in saliva testing, prior to forming nitric oxide (NO). Domestic animal feed Symptoms of nitrate poisoning in domestic animals include increased heart rate and respiration; in advanced cases blood and tissue may turn a blue or brown color. Feed can be tested for nitrate; treatment consists of supplementing or substituting existing supplies with lower-nitrate material. Safe levels of nitrate have been established for the various types of livestock; the values are given on a dry (moisture-free) basis. Salts and covalent derivatives Nitrate forms salts or covalent derivatives with many elements of the periodic table.
Physical sciences
Salts
null
21505
https://en.wikipedia.org/wiki/Nucleotide
Nucleotide
Nucleotides are organic molecules composed of a nitrogenous base, a pentose sugar and a phosphate. They serve as monomeric units of the nucleic acid polymers – deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), both of which are essential biomolecules within all life-forms on Earth. Nucleotides are obtained in the diet and are also synthesized from common nutrients by the liver. Nucleotides are composed of three subunit molecules: a nucleobase, a five-carbon sugar (ribose or deoxyribose), and a phosphate group consisting of one to three phosphates. The four nucleobases in DNA are guanine, adenine, cytosine, and thymine; in RNA, uracil is used in place of thymine. Nucleotides also play a central role in metabolism at a fundamental, cellular level. They provide chemical energy—in the form of the nucleoside triphosphates, adenosine triphosphate (ATP), guanosine triphosphate (GTP), cytidine triphosphate (CTP), and uridine triphosphate (UTP)—throughout the cell for the many cellular functions that demand energy, including: amino acid, protein and cell membrane synthesis, moving the cell and cell parts (both internally and intercellularly), cell division, etc.. In addition, nucleotides participate in cell signaling (cyclic guanosine monophosphate or cGMP and cyclic adenosine monophosphate or cAMP) and are incorporated into important cofactors of enzymatic reactions (e.g., coenzyme A, FAD, FMN, NAD, and NADP+). In experimental biochemistry, nucleotides can be radiolabeled using radionuclides to yield radionucleotides. 5-nucleotides are also used in flavour enhancers as food additive to enhance the umami taste, often in the form of a yeast extract. Structure A nucleotide is composed of three distinctive chemical sub-units: a five-carbon sugar molecule, a nucleobase (the two of which together are called a nucleoside), and one phosphate group. With all three joined, a nucleotide is also termed a "nucleoside monophosphate", "nucleoside diphosphate" or "nucleoside triphosphate", depending on how many phosphates make up the phosphate group. In nucleic acids, nucleotides contain either a purine or a pyrimidine base—i.e., the nucleobase molecule, also known as a nitrogenous base—and are termed ribonucleotides if the sugar is ribose, or deoxyribonucleotides if the sugar is deoxyribose. Individual phosphate molecules repetitively connect the sugar-ring molecules in two adjacent nucleotide monomers, thereby connecting the nucleotide monomers of a nucleic acid end-to-end into a long chain. These chain-joins of sugar and phosphate molecules create a 'backbone' strand for a single- or double helix. In any one strand, the chemical orientation (directionality) of the chain-joins runs from the 5'-end to the 3'-end (read: 5 prime-end to 3 prime-end)—referring to the five carbon sites on sugar molecules in adjacent nucleotides. In a double helix, the two strands are oriented in opposite directions, which permits base pairing and complementarity between the base-pairs, all which is essential for replicating or transcribing the encoded information found in DNA. Nucleic acids then are polymeric macromolecules assembled from nucleotides, the monomer-units of nucleic acids. The purine bases adenine and guanine and pyrimidine base cytosine occur in both DNA and RNA, while the pyrimidine bases thymine (in DNA) and uracil (in RNA) occur in just one. Adenine forms a base pair with thymine with two hydrogen bonds, while guanine pairs with cytosine with three hydrogen bonds. 
In addition to being building blocks for the construction of nucleic acid polymers, singular nucleotides play roles in cellular energy storage and provision, cellular signaling, as a source of phosphate groups used to modulate the activity of proteins and other signaling molecules, and as enzymatic cofactors, often carrying out redox reactions. Signaling cyclic nucleotides are formed by binding the phosphate group twice to the same sugar molecule, bridging the 5'- and 3'- hydroxyl groups of the sugar. Some signaling nucleotides differ from the standard single-phosphate group configuration, in having multiple phosphate groups attached to different positions on the sugar. Nucleotide cofactors include a wider range of chemical groups attached to the sugar via the glycosidic bond, including nicotinamide and flavin, and in the latter case, the ribose sugar is linear rather than forming the ring seen in other nucleotides. Synthesis Nucleotides can be synthesized by a variety of means, both in vitro and in vivo. In vitro, protecting groups may be used during laboratory production of nucleotides. A purified nucleoside is protected to create a phosphoramidite, which can then be used to obtain analogues not found in nature and/or to synthesize an oligonucleotide. In vivo, nucleotides can be synthesized de novo or recycled through salvage pathways. The components used in de novo nucleotide synthesis are derived from biosynthetic precursors of carbohydrate and amino acid metabolism, and from ammonia and carbon dioxide. Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. The liver is the major organ of de novo synthesis of all four nucleotides. De novo synthesis of pyrimidines and purines follows two different pathways. Pyrimidines are synthesized first from aspartate and carbamoyl-phosphate in the cytoplasm to the common precursor ring structure orotic acid, onto which a phosphorylated ribosyl unit is covalently linked. Purines, however, are first synthesized from the sugar template onto which the ring synthesis occurs. For reference, the syntheses of the purine and pyrimidine nucleotides are carried out by several enzymes in the cytoplasm of the cell, not within a specific organelle. Nucleotides undergo breakdown such that useful parts can be reused in synthesis reactions to create new nucleotides. Pyrimidine ribonucleotide synthesis The synthesis of the pyrimidines CTP and UTP occurs in the cytoplasm and starts with the formation of carbamoyl phosphate from glutamine and CO2. Next, aspartate carbamoyltransferase catalyzes a condensation reaction between aspartate and carbamoyl phosphate to form carbamoyl aspartic acid, which is cyclized into 4,5-dihydroorotic acid by dihydroorotase. The latter is converted to orotate by dihydroorotate oxidase. The net reaction is: (S)-Dihydroorotate + O2 → Orotate + H2O2 Orotate is covalently linked with a phosphorylated ribosyl unit. The covalent linkage between the ribose and pyrimidine occurs at position C1 of the ribose unit, which contains a pyrophosphate, and N1 of the pyrimidine ring. Orotate phosphoribosyltransferase (PRPP transferase) catalyzes the net reaction yielding orotidine monophosphate (OMP): Orotate + 5-Phospho-α-D-ribose 1-diphosphate (PRPP) → Orotidine 5'-phosphate + Pyrophosphate Orotidine 5'-monophosphate is decarboxylated by orotidine-5'-phosphate decarboxylase to form uridine monophosphate (UMP). 
PRPP transferase catalyzes both the ribosylation and decarboxylation reactions, forming UMP from orotic acid in the presence of PRPP. It is from UMP that other pyrimidine nucleotides are derived. UMP is phosphorylated by two kinases to uridine triphosphate (UTP) via two sequential reactions with ATP. First, the diphosphate from UDP is produced, which in turn is phosphorylated to UTP. Both steps are fueled by ATP hydrolysis: ATP + UMP → ADP + UDP UDP + ATP → UTP + ADP CTP is subsequently formed by the amination of UTP by the catalytic activity of CTP synthetase. Glutamine is the NH3 donor and the reaction is fueled by ATP hydrolysis, too: UTP + Glutamine + ATP + H2O → CTP + ADP + Pi Cytidine monophosphate (CMP) is derived from cytidine triphosphate (CTP) with subsequent loss of two phosphates. Purine ribonucleotide synthesis The atoms that are used to build the purine nucleotides come from a variety of sources: The de novo synthesis of purine nucleotides by which these precursors are incorporated into the purine ring proceeds by a 10-step pathway to the branch-point intermediate IMP, the nucleotide of the base hypoxanthine. AMP and GMP are subsequently synthesized from this intermediate via separate, two-step pathways. Thus, purine moieties are initially formed as part of the ribonucleotides rather than as free bases. Six enzymes take part in IMP synthesis. Three of them are multifunctional: GART (reactions 2, 3, and 5) PAICS (reactions 6, and 7) ATIC (reactions 9, and 10) The pathway starts with the formation of PRPP. PRPS1 is the enzyme that activates R5P, which is formed primarily by the pentose phosphate pathway, to PRPP by reacting it with ATP. The reaction is unusual in that a pyrophosphoryl group is directly transferred from ATP to C1 of R5P and that the product has the α configuration about C1. This reaction is also shared with the pathways for the synthesis of Trp, His, and the pyrimidine nucleotides. Being on a major metabolic crossroad and requiring much energy, this reaction is highly regulated. In the first reaction unique to purine nucleotide biosynthesis, PPAT catalyzes the displacement of PRPP's pyrophosphate group (PPi) by an amide nitrogen donated from either glutamine (N), glycine (N&C), aspartate (N), folic acid (C1), or CO2. This is the committed step in purine synthesis. The reaction occurs with the inversion of configuration about ribose C1, thereby forming β-5-phosphorybosylamine (5-PRA) and establishing the anomeric form of the future nucleotide. Next, a glycine is incorporated fueled by ATP hydrolysis, and the carboxyl group forms an amine bond to the NH2 previously introduced. A one-carbon unit from folic acid coenzyme N10-formyl-THF is then added to the amino group of the substituted glycine followed by the closure of the imidazole ring. Next, a second NH2 group is transferred from glutamine to the first carbon of the glycine unit. A carboxylation of the second carbon of the glycin unit is concomitantly added. This new carbon is modified by the addition of a third NH2 unit, this time transferred from an aspartate residue. Finally, a second one-carbon unit from formyl-THF is added to the nitrogen group and the ring is covalently closed to form the common purine precursor inosine monophosphate (IMP). Inosine monophosphate is converted to adenosine monophosphate in two steps. First, GTP hydrolysis fuels the addition of aspartate to IMP by adenylosuccinate synthase, substituting the carbonyl oxygen for a nitrogen and forming the intermediate adenylosuccinate. 
Fumarate is then cleaved off forming adenosine monophosphate. This step is catalyzed by adenylosuccinate lyase. Inosine monophosphate is converted to guanosine monophosphate by the oxidation of IMP forming xanthylate, followed by the insertion of an amino group at C2. NAD+ is the electron acceptor in the oxidation reaction. The amide group transfer from glutamine is fueled by ATP hydrolysis. Pyrimidine and purine degradation In humans, pyrimidine rings (C, T, U) can be degraded completely to CO2 and NH3 (urea excretion). That having been said, purine rings (G, A) cannot. Instead, they are degraded to the metabolically inert uric acid which is then excreted from the body. Uric acid is formed when GMP is split into the base guanine and ribose. Guanine is deaminated to xanthine which in turn is oxidized to uric acid. This last reaction is irreversible. Similarly, uric acid can be formed when AMP is deaminated to IMP from which the ribose unit is removed to form hypoxanthine. Hypoxanthine is oxidized to xanthine and finally to uric acid. Instead of uric acid secretion, guanine and IMP can be used for recycling purposes and nucleic acid synthesis in the presence of PRPP and aspartate (NH3 donor). Prebiotic synthesis of nucleotides Theories about the origin of life require knowledge of chemical pathways that permit formation of life's key building blocks under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides, the fundamental molecules that combine in series to form RNA. Complex molecules like RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for reliable information transfer, and thus Darwinian evolution. Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose, driven solely by wet-dry cycles. Purine nucleosides can be synthesized by a similar pathway. 5'-mono- and di-phosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the purine and pyrimidine bases. Thus a reaction network towards the purine and pyrimidine RNA building blocks can be established starting from simple atmospheric or volcanic molecules. Unnatural base pair (UBP) An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. Examples include d5SICS and dNaM. These artificial nucleotides bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. E. coli have been induced to replicate a plasmid containing UBPs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Medical applications of synthetic nucleotides The applications of synthetic nucleotides vary widely and include disease diagnosis, treatment, or precision medicine. Antiviral or Antiretroviral agents: several nucleotide derivatives have been used in the treatment against infection with Hepatitis and HIV. Examples of direct nucleoside analog reverse-transcriptase inhibitors (NRTIs) include Tenofovir disoproxil, Tenofovir alafenamide, and Sofosbuvir. On the other hand, agents such as Mericitabine, Lamivudine, Entecavir and Telbivudine must first undergo metabolization via phosphorylation to become activated. 
Antisense oligonucleotides (ASO): synthetic oligonucleotides have been used in the treatment of rare heritable diseases since they can bind specific RNA transcripts and ultimately modulate protein expression. Spinal muscular atrophy, amyotrophic lateral sclerosis, homozygous familial hypercholesterolemia, and primary hyperoxaluria type 1 are all amenable to ASO-based therapy. The application of oligonucleotides is a new frontier in precision medicine and management of conditions which are untreatable. Synthetic guide RNA (gRNA): synthetic nucleotides can be used to design gRNA which are essential for the proper function of gene-editing technologies such as CRISPR-Cas9. Length unit Nucleotide (abbreviated "nt") is a common unit of length for single-stranded nucleic acids, similar to how base pair is a unit of length for double-stranded nucleic acids. Abbreviation codes for degenerate bases The IUPAC has designated the symbols for nucleotides. Apart from the five (A, G, C, T/U) bases, often degenerate bases are used especially for designing PCR primers. These nucleotide codes are listed here. Some primer sequences may also include the character "I", which codes for the non-standard nucleotide inosine. Inosine occurs in tRNAs and will pair with adenine, cytosine, or thymine. This character does not appear in the following table, however, because it does not represent a degeneracy. While inosine can serve a similar function as the degeneracy "H", it is an actual nucleotide, rather than a representation of a mix of nucleotides that covers each possible pairing needed.
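To make the degenerate-base codes mentioned above concrete, the following Python sketch encodes the standard IUPAC degeneracy symbols as sets of bases and expands a degenerate primer into the concrete sequences it stands for; the primer shown is a hypothetical example chosen for this sketch, not one from the article.

from itertools import product

# Standard IUPAC nucleotide codes: each symbol stands for a set of possible bases.
IUPAC_CODES = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer):
    # Enumerate every concrete sequence covered by the degenerate primer.
    return ["".join(bases) for bases in product(*(IUPAC_CODES[s] for s in primer.upper()))]

print(expand_degenerate("ATR"))         # ['ATA', 'ATG']
print(len(expand_degenerate("GTNAC")))  # 4 concrete sequences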
Biology and health sciences
Nucleic acids
Biology
21506
https://en.wikipedia.org/wiki/Numerical%20analysis
Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms. The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after being translated into digits, it produces approximate solutions within specified error bounds. Applications The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically: Advanced numerical methods are essential in making numerical weather prediction feasible. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically. In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. History The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. 
Whittaker in 1912. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done. The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications. Key concepts Direct and iterative methods Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability). In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems. Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method. As an example, consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x. For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. After a few bisection steps it can be concluded that the solution lies between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2. Conditioning Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem. Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. 
For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x). Discretization Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum. Generation and propagation of errors The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem. Round-off Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are). Truncation and discretization error Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above of computing the solution of 3x³ + 4 = 28 by bisection, after ten iterations the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01. Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation that chains several such operations is even more inexact. A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen. Numerical stability and well-posed problems An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. On the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. Areas of study The field of numerical analysis includes many sub-disciplines. Some of the major ones are: Computing values of functions One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic. 
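As a small, self-contained sketch of the point about polynomial evaluation (the code and the sample polynomial are illustrative assumptions, not taken from this article), the following Python snippet compares naive evaluation with the Horner scheme, which needs only one multiplication and one addition per coefficient.

def horner(coeffs, x):
    # Evaluate a polynomial given its coefficients from the highest power down,
    # using Horner's scheme: ((c0*x + c1)*x + c2)*x + ...
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def naive(coeffs, x):
    # Straightforward evaluation: each term is computed with its own power of x.
    n = len(coeffs) - 1
    return sum(c * x ** (n - i) for i, c in enumerate(coeffs))

# p(x) = 3x^3 + 4, the polynomial from the root-finding example above.
coeffs = [3, 0, 0, 4]
print(horner(coeffs, 2.0), naive(coeffs, 2.0))  # both print 28.0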
Interpolation, extrapolation, and regression Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found. Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this. Solving equations and systems of equations Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, an equation such as 2x + 5 = 3 is linear, while one such as 2x² + 5 = 3 is not. Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting. Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations. Solving eigenvalue or singular value problems Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis. Optimization Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints. The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case in which both the objective function and the constraints are linear. A famous method in linear programming is the simplex method. The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems. Evaluating integrals Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids. 
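The "divide and conquer" idea behind the Newton–Cotes rules can be sketched in a few lines of Python; the test integral below is chosen for this illustration and is not taken from the article. The composite midpoint rule splits the interval into many small subintervals and sums a simple approximation from each one.

def midpoint_rule(f, a, b, n):
    # Composite midpoint rule: split [a, b] into n subintervals and approximate
    # the integral on each one by f(midpoint) * width.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Example: the exact value of the integral of x**2 over [0, 1] is 1/3.
approx = midpoint_rule(lambda x: x * x, 0.0, 1.0, 100)
print(approx, abs(approx - 1.0 / 3.0))  # the error shrinks roughly as 1/n**2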
Differential equations Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations. Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation. Software Since the late twentieth century, most algorithms have been implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library. Over the years the Royal Statistical Society has published numerous algorithms in its Applied Statistics series (the "AS" functions), and the ACM has done likewise in its Transactions on Mathematical Software (the "TOMS" algorithms). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines. There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to MATLAB), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude. Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic, which can provide more accurate results. Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built-in "solver".
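The discretization of differential equations described above can be illustrated with a minimal Python sketch (the function names and step counts are illustrative): the forward Euler method replaces the derivative in an ordinary differential equation by a finite difference, and the discretization error shrinks as the step size decreases.

import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n forward-Euler steps,
    the simplest finite-difference discretization of an ODE."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # replace the derivative by a finite difference
        t += h
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t).
for n in (10, 100, 1000):
    approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, n)
    print(n, abs(approx - math.exp(-1.0)))   # discretization error shrinks as n grows

In practice one would normally use library routines, such as the ODE solvers shipped with SciPy or similar packages, rather than a hand-written loop, but the underlying idea of replacing a continuum by finitely many steps is the same.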
Mathematics
Calculus and analysis
null
21523
https://en.wikipedia.org/wiki/Neural%20network%20%28machine%20learning%29
Neural network (machine learning)
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a model inspired by the structure and function of biological neural networks in animal brains. An ANN consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers. Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information. Training Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data. History Early work Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement. Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing. Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. 
This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence. In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956). In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research. R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence. The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning. Deep learning breakthroughs in the 1960s and 1970s Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned internal representations to classify non-linearily separable pattern classes. Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning. Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967). In 1976 transfer learning was introduced in neural networks learning. 
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation. Backpropagation Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. Convolutional neural networks Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision. The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images. From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments. Recurrent neural networks One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield(1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contains cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past. In 1982 a recurrent neural network, with an array architecture (rather than a multilayer perceptron architecture), named Crossbar Adaptive Array used direct recurrent connections from the output to the supervisor (teaching ) inputs. In addition of computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks. 
In cognitive psychology, the journal American Psychologist in early 1980's carried out a debate on relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982 the Crossbar Adaptive Array gave a neural network model of cognition-emotion relation. It was an example of a debate where an AI system, a recurrent neural network, contributed to an issue in the same time addressed by cognitive psychology. Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple applications domains. This was not yet the modern version of LSTM, which required the forget gate, which was introduced in 1999. It became the default choice for RNN architecture. During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. Deep learning Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly. In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning". Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications. 
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras et al. Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerning deepfakes. Diffusion models (2015) eclipsed GANs in generative modeling since then, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net. During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need. It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture. Models ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph. An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another, allowing weights to choose the signal between neurons. Artificial neurons ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image. 
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image. Organization The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks. Hyperparameter A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers. Learning Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation. Learning rate The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. 
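The computations just described, a weighted sum plus a bias passed through an activation function, and a weight update whose size is scaled by the learning rate, can be sketched for a single neuron as follows. This is a minimal illustration in Python, assuming a logistic (sigmoid) activation and a squared-error loss; the function names and numbers are illustrative, not taken from any particular library.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias (the 'activation'),
    passed through a nonlinear activation function (here the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(inputs, target, weights, bias, learning_rate):
    """One gradient-descent update of the weights and bias for a squared-error loss."""
    out = neuron(inputs, weights, bias)
    # gradient of 0.5*(out - target)^2 through the sigmoid: (out - target) * out * (1 - out)
    delta = (out - target) * out * (1.0 - out)
    new_weights = [w - learning_rate * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - learning_rate * delta
    return new_weights, new_bias

w, b = [0.1, -0.2], 0.0
for _ in range(100):
    w, b = sgd_step([1.0, 2.0], 1.0, w, b, learning_rate=0.5)
print(neuron([1.0, 2.0], w, b))   # output moves toward the target value 1.0

A larger learning_rate makes each corrective step bigger and training faster but less precise, while a smaller value makes the steps smaller and slower, which mirrors the trade-off described above.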
In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change. Cost function While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) or because it arises from the model (e.g. in a probabilistic model the model's posterior probability can be used as an inverse cost). Backpropagation Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backprop calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks. Learning paradigms Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task. Supervised learning Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far. Unsupervised learning In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output f(x). The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a, where a is a constant, and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
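The trivial unsupervised example above can be checked numerically: gradient descent on the cost C = mean((x − a)²) for a constant model f(x) = a drives a toward the mean of the data. This is a minimal sketch; the data values and step size are illustrative.

data = [2.0, 4.0, 6.0, 8.0]
a = 0.0
learning_rate = 0.1
for _ in range(200):
    grad = sum(2.0 * (a - x) for x in data) / len(data)   # derivative dC/da
    a -= learning_rate * grad
print(a, sum(data) / len(data))   # both are approximately 5.0, the mean of the data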
Reinforcement learning In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually can only be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly. Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC. ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks. Self-learning Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W = ||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation: in situation s perform action a; receive consequence situation s'; compute emotion of being in consequence situation v(s'); update crossbar memory w'(a,s) = w(a,s) + v(s'). The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment where it behaves, and the other is the genetic environment, from which it initially, and only once, receives initial emotions about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in a behavioral environment that contains both desirable and undesirable situations. Neuroevolution Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
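The crossbar self-learning iteration described above can be rendered schematically as follows. This is a rough Python sketch only: the environment dynamics and the emotion (genome) values are placeholder assumptions, and only the memory update w'(a,s) = w(a,s) + v(s') follows the text.

n_situations, n_actions = 4, 3
w = [[0.0] * n_situations for _ in range(n_actions)]   # memory matrix W = ||w(a, s)||
v = [0.0, 1.0, -1.0, 0.5]                              # placeholder genome vector: initial emotions per situation

def environment(s, a):
    """Placeholder behavioral environment: returns the consequence situation s'."""
    return (s + a + 1) % n_situations

s = 0
for _ in range(20):
    a = max(range(n_actions), key=lambda act: w[act][s])   # in situation s choose an action a
    s_next = environment(s, a)                             # receive consequence situation s'
    emotion = v[s_next]                                    # compute emotion of being in s'
    w[a][s] += emotion                                     # update crossbar memory w'(a, s) = w(a, s) + v(s')
    s = s_next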
Stochastic neural network Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions , or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks. Other In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks. Modes Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set. Types ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers. Some of the main breakthroughs include: Convolutional neural networks that have proven particularly successful in processing visual and other two-dimensional data; where long short-term memory avoids the vanishing gradient problem and can handle signals that have a mix of low and high frequency components aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads; Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input. Network design Using artificial neural networks requires an understanding of their characteristics. Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc. ). Overly complex models learn slowly. Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. 
However, selecting and tuning an algorithm for training on unseen data requires significant experimentation. Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust. Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. scikit-learn library provides functions to help with building a deep network from scratch. We can then implement a deep network with TensorFlow or Keras. Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. Applications Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include: Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling) Data processing (including filtering, clustering, blind source separation, and compression) Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management) Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making) Sequence recognition (including gesture, speech, and handwritten and printed text recognition) Sensor data analysis (including image analysis) Robotics (including directing manipulators and prostheses) Data mining (including knowledge discovery in databases) Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets) Quantum chemistry General game playing Generative AI Data visualization Machine translation Social network filtering E-mail spam filtering Medical diagnosis ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information. ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. It can also be useful to mitigate flood by the use of ANNs for modelling rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective to discriminate between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, for detecting botnets, credit cards frauds and network intrusions. ANNs have been proposed as a tool to solve partial differential equations in physics and simulate the properties of many-body open quantum systems. 
In brain research ANNs have studied short-term behavior of individual neurons, the dynamics of neural circuitry arise from interactions between individual neurons and how behavior can arise from abstract neural modules that represent complete subsystems. Studies considered long-and short-term plasticity of neural systems and their relation to learning and memory from the individual neuron to the system level. It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition. Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation. Theoretical properties Computational power The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters. A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power. Capacity A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Two notions of capacity are known by the community. The information capacity and the VC Dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that derive from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion, is the VC dimension. VC Dimension uses the principles of measure theory and finds the maximum capacity under the best possible circumstances. This is, given input data in a specific form. As noted in, the VC Dimension for arbitrary inputs is half the information capacity of a Perceptron. The VC Dimension for arbitrary points is sometimes referred to as Memory Capacity. Convergence Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical. Another issue worthy to mention is that training may cross some Saddle point which may lead the convergence to the wrong direction. The convergence behavior of certain types of ANN architectures are more understood than others. 
When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example is that when parameters are small, ANNs are observed to often fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions. Generalization and statistics Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting. Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified. By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications. The softmax activation function is y_i = e^{x_i} / Σ_j e^{x_j}, where y_i is the probability assigned to class i and the x_j are the inputs to the output layer. Criticism Training A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation. Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research is devoted to extrapolating multiple training scenarios from a single training experience, and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right). Theory A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything". One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go. Technology writer Roger Bridgman commented: Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture. Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies. Hardware Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons which require enormous CPU power and time. Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days. 
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU. Practical counterexamples Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture. Hybrid approaches Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind. Dataset bias Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets. Gallery Recent advancements and future directions Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine. Image processing In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging. Speech recognition By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products. Natural language processing In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. 
They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies. Control systems In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications. Finance ANNs are used for stock market prediction and credit scoring: In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions. In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process. ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies. Medicine ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine. Content creation ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.
Technology
Artificial intelligence concepts
null
21525
https://en.wikipedia.org/wiki/Nutrition
Nutrition
Nutrition is the biochemical and physiological process by which an organism uses food to support its life. It provides organisms with nutrients, which can be metabolized to create energy and chemical structures. Failure to obtain the required amount of nutrients causes malnutrition. Nutritional science is the study of nutrition, though it typically emphasizes human nutrition. The type of organism determines what nutrients it needs and how it obtains them. Organisms obtain nutrients by consuming organic matter, consuming inorganic matter, absorbing light, or some combination of these. Some can produce nutrients internally by consuming basic elements, while some must consume other organisms to obtain pre-existing nutrients. All forms of life require carbon, energy, and water as well as various other molecules. Animals require complex nutrients such as carbohydrates, lipids, and proteins, obtaining them by consuming other organisms. Humans have developed agriculture and cooking to replace foraging and advance human nutrition. Plants acquire nutrients through the soil and the atmosphere. Fungi absorb nutrients around them by breaking them down and absorbing them through the mycelium. History Scientific analysis of food and nutrients began during the chemical revolution in the late 18th century. Chemists in the 18th and 19th centuries experimented with different elements and food sources to develop theories of nutrition. Modern nutrition science began in the 1910s as individual micronutrients began to be identified. The first vitamin to be chemically identified was thiamine in 1926, and vitamin C was identified as a protection against scurvy in 1932. The role of vitamins in nutrition was studied in the following decades. The first recommended dietary allowances for humans were developed to address fears of disease caused by food deficiencies during the Great Depression and the Second World War. Due to its importance in human health, the study of nutrition has heavily emphasized human nutrition and agriculture, while ecology is a secondary concern. Nutrients Nutrients are substances that provide energy and physical components to the organism, allowing it to survive, grow, and reproduce. Nutrients can be basic elements or complex macromolecules. Approximately 30 elements are found in organic matter, with nitrogen, carbon, and phosphorus being the most important. Macronutrients are the primary substances required by an organism, and micronutrients are substances required by an organism in trace amounts. Organic micronutrients are classified as vitamins, and inorganic micronutrients are classified as minerals. Nutrients can also be classified as essential or nonessential, with essential meaning the body cannot synthesize the nutrient on its own. Nutrients are absorbed by the cells and used in metabolic biochemical reactions. These include fueling reactions that create precursor metabolites and energy, biosynthetic reactions that convert precursor metabolites into building block molecules, polymerizations that combine these molecules into macromolecule polymers, and assembly reactions that use these polymers to construct cellular structures. Nutritional groups Organisms can be classified by how they obtain carbon and energy. Heterotrophs are organisms that obtain nutrients by consuming the carbon of other organisms, while autotrophs are organisms that produce their own nutrients from the carbon of inorganic substances like carbon dioxide. 
Mixotrophs are organisms that can be heterotrophs and autotrophs, including some plankton and carnivorous plants. Phototrophs obtain energy from light, while chemotrophs obtain energy by consuming chemical energy from matter. Organotrophs consume other organisms to obtain electrons, while lithotrophs obtain electrons from inorganic substances, such as water, hydrogen sulfide, dihydrogen, iron(II), sulfur, or ammonium. Prototrophs can create essential nutrients from other compounds, while auxotrophs must consume preexisting nutrients. Diet In nutrition, the diet of an organism is the sum of the foods it eats. A healthy diet improves the physical and mental health of an organism. This requires ingestion and absorption of vitamins, minerals, essential amino acids from protein and essential fatty acids from fat-containing food. Carbohydrates, protein and fat play major roles in ensuring the quality of life, health and longevity of the organism. Some cultures and religions have restrictions on what is acceptable for their diet. Nutrient cycle A nutrient cycle is a biogeochemical cycle involving the movement of inorganic matter through a combination of soil, organisms, air or water, where they are exchanged in organic matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle, sulfur cycle, nitrogen cycle, water cycle, phosphorus cycle, and oxygen cycle, among others that continually recycle along with other mineral nutrients into productive ecological nutrition. Biogeochemical cycles that are performed by living organisms and natural processes are water, carbon, nitrogen, phosphorus, and sulfur cycles. Nutrient cycles allow these essential elements to return to the environment after being absorbed or consumed. Without proper nutrient cycling, there would be risk of change in oxygen levels, climate, and ecosystem function. Foraging Foraging is the process of seeking out nutrients in the environment. It may also be defined to include the subsequent use of the resources. Some organisms, such as animals and bacteria, can navigate to find nutrients, while others, such as plants and fungi, extend outward to find nutrients. Foraging may be random, in which the organism seeks nutrients without method, or it may be systematic, in which the organism can go directly to a food source. Organisms are able to detect nutrients through taste or other forms of nutrient sensing, allowing them to regulate nutrient intake. Optimal foraging theory is a model that explains foraging behavior as a cost–benefit analysis in which an animal must maximize the gain of nutrients while minimizing the amount of time and energy spent foraging. It was created to analyze the foraging habits of animals, but it can also be extended to other organisms. Some organisms are specialists that are adapted to forage for a single food source, while others are generalists that can consume a variety of food sources. Nutrient deficiency Nutrient deficiencies, known as malnutrition, occur when an organism does not have the nutrients that it needs. This may be caused by suddenly losing nutrients or the inability to absorb proper nutrients. Not only is malnutrition the result of a lack of necessary nutrients, but it can also be a result of other illnesses and health conditions. When this occurs, an organism will adapt by reducing energy consumption and expenditure to prolong the use of stored nutrients. 
It will use stored energy reserves until they are depleted, and it will then break down its own body mass for additional energy. A balanced diet includes appropriate amounts of all essential and non-essential nutrients. These can vary by age, weight, sex, physical activity levels, and more. A lack of just one essential nutrient can cause bodily harm, just as an overabundance can cause toxicity. The Daily Reference Values keep the majority of people from nutrient deficiencies. DRVs are not recommendations but a combination of nutrient references to educate professionals and policymakers on what the maximum and minimum nutrient intakes are for the average person. Food labels also use DRVs as a reference to create safe nutritional guidelines for the average healthy person. In organisms Animal Animals are heterotrophs that consume other organisms to obtain nutrients. Herbivores are animals that eat plants, carnivores are animals that eat other animals, and omnivores are animals that eat both plants and other animals. Many herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while obligate carnivores must eat animal meats to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize. Animals generally have a higher requirement of energy in comparison to plants. The macronutrients essential to animal life are carbohydrates, amino acids, and fatty acids. All macronutrients except water are required by the body for energy, however, this is not their sole physiological function. The energy provided by macronutrients in food is measured in kilocalories, usually called Calories, where 1 Calorie is the amount of energy required to raise 1 kilogram of water by 1 degree Celsius. Carbohydrates are molecules that store significant amounts of energy. Animals digest and metabolize carbohydrates to obtain this energy. Carbohydrates are typically synthesized by plants during metabolism, and animals have to obtain most carbohydrates from nature, as they have only a limited ability to generate them. They include sugars, oligosaccharides, and polysaccharides. Glucose is the simplest form of carbohydrate. Carbohydrates are broken down to produce glucose and short-chain fatty acids, and they are the most abundant nutrients for herbivorous land animals. Carbohydrates contain 4 calories per gram. Lipids provide animals with fats and oils. They are not soluble in water, and they can store energy for an extended period of time. They can be obtained from many different plant and animal sources. Most dietary lipids are triglycerides, composed of glycerol and fatty acids. Phospholipids and sterols are found in smaller amounts. An animal's body will reduce the amount of fatty acids it produces as dietary fat intake increases, while it increases the amount of fatty acids it produces as carbohydrate intake increases. Fats contain 9 calories per gram. Protein consumed by animals is broken down to amino acids, which would be later used to synthesize new proteins. Protein is used to form cellular structures, fluids, and enzymes (biological catalysts). Enzymes are essential to most metabolic processes, as well as DNA replication, repair, and transcription. Protein contains 4 calories per gram. Much of animal behavior is governed by nutrition. Migration patterns and seasonal breeding take place in conjunction with food availability, and courtship displays are used to display an animal's health. 
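As a quick illustration of the energy values cited above (roughly 4 Calories per gram for carbohydrate and protein and 9 Calories per gram for fat), the energy content of a food can be estimated from its macronutrient composition. The following is a minimal sketch only; the function name and the gram amounts are invented for illustration and do not describe any real food.

    # Rough food-energy estimate from macronutrient content.
    # The 4/9/4 Calorie-per-gram figures are the approximate values cited in the text.
    CALORIES_PER_GRAM = {"carbohydrate": 4.0, "fat": 9.0, "protein": 4.0}

    def food_energy_calories(grams: dict) -> float:
        """Return estimated energy in Calories (kcal) for the given macronutrient grams."""
        return sum(CALORIES_PER_GRAM[name] * amount for name, amount in grams.items())

    snack = {"carbohydrate": 30.0, "fat": 10.0, "protein": 5.0}  # hypothetical serving
    print(food_energy_calories(snack))  # 30*4 + 10*9 + 5*4 = 230 Calories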
Animals develop positive and negative associations with foods that affect their health, and they can instinctively avoid foods that have caused toxic injury or nutritional imbalances through a conditioned food aversion. Some animals, such as rats, do not seek out new types of foods unless they have a nutrient deficiency. Human Early human nutrition consisted of foraging for nutrients, like other animals, but it diverged at the beginning of the Holocene with the Neolithic Revolution, in which humans developed agriculture to produce food. The Chemical Revolution in the 18th century allowed humans to study the nutrients in foods and develop more advanced methods of food preparation. Major advances in economics and technology during the 20th century allowed mass production and food fortification to better meet the nutritional needs of humans. Human behavior is closely related to human nutrition, making it a subject of social science in addition to biology. Nutrition in humans is balanced with eating for pleasure, and optimal diet may vary depending on the demographics and health concerns of each person. Humans are omnivores that eat a variety of foods. Cultivation of cereals and production of bread has made up a key component of human nutrition since the beginning of agriculture. Early humans hunted animals for meat, and modern humans domesticate animals to consume their meat and eggs. The development of animal husbandry has also allowed humans in some cultures to consume the milk of other animals and process it into foods such as cheese. Other foods eaten by humans include nuts, seeds, fruits, and vegetables. Access to domesticated animals as well as vegetable oils has caused a significant increase in human intake of fats and oils. Humans have developed advanced methods of food processing that prevent contamination of pathogenic microorganisms and simplify the production of food. These include drying, freezing, heating, milling, pressing, packaging, refrigeration, and irradiation. Most cultures add herbs and spices to foods before eating to add flavor, though most do not significantly affect nutrition. Other additives are also used to improve the safety, quality, flavor, and nutritional content of food. Humans obtain most carbohydrates as starch from cereals, though sugar has grown in importance. Lipids can be found in animal fat, butterfat, vegetable oil, and leaf vegetables, and they are also used to increase flavor in foods. Protein can be found in virtually all foods, as it makes up cellular material, though certain methods of food processing may reduce the amount of protein in a food. Humans can also obtain energy from ethanol, which is both a food and a drug, but it provides relatively few essential nutrients and is associated with nutritional deficiencies and other health risks. In humans, poor nutrition can cause deficiency-related diseases, such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism, or nutrient-excess conditions, such as obesity and metabolic syndrome. Other conditions possibly affected by nutrition disorders include cardiovascular diseases, diabetes, and osteoporosis. Undernutrition can lead to wasting in acute cases, and stunting of marasmus in chronic cases of malnutrition. Domesticated animal In domesticated animals, such as pets, livestock, and working animals, as well as other animals in captivity, nutrition is managed by humans through animal feed. Fodder and forage are provided to livestock. 
Specialized pet food has been manufactured since 1860, and subsequent research and development have addressed the nutritional needs of pets. Dog food and cat food in particular are heavily studied and typically include all essential nutrients for these animals. Cats are sensitive to some common nutrients, such as taurine, and require additional nutrients derived from meat. Large-breed puppies are susceptible to overnutrition, as small-breed dog food is more energy dense than they can absorb. Plant Most plants obtain nutrients through inorganic substances absorbed from the soil or the atmosphere. Carbon, hydrogen, oxygen, nitrogen, and sulfur are essential nutrients that make up organic material in a plant and allow enzymic processes. These are absorbed ions in the soil, such as bicarbonate, nitrate, ammonium, and sulfate, or they are absorbed as gases, such as carbon dioxide, water, oxygen gas, and sulfur dioxide. Phosphorus, boron, and silicon are used for esterification. They are obtained through the soil as phosphates, boric acid, and silicic acid, respectively. Other nutrients used by plants are potassium, sodium, calcium, magnesium, manganese, chlorine, iron, copper, zinc, and molybdenum. Plants uptake essential elements from the soil through their roots and from the air (consisting of mainly nitrogen and oxygen) through their leaves. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. Although nitrogen is plentiful in the Earth's atmosphere, very few plants can use this directly. Most plants, therefore, require nitrogen compounds to be present in the soil in which they grow. This is made possible by the fact that largely inert atmospheric nitrogen is changed in a nitrogen fixation process to biologically usable forms in the soil by bacteria. As these nutrients do not provide the plant with energy, they must obtain energy by other means. Green plants absorb energy from sunlight with chloroplasts and convert it to usable energy through photosynthesis. Fungus Fungi are chemoheterotrophs that consume external matter for energy. Most fungi absorb matter through the root-like mycelium, which grows through the organism's source of nutrients and can extend indefinitely. The fungus excretes extracellular enzymes to break down surrounding matter and then absorbs the nutrients through the cell wall. Fungi can be parasitic, saprophytic, or symbiotic. Parasitic fungi attach and feed on living hosts, such as animals, plants, or other fungi. Saprophytic fungi feed on dead and decomposing organisms. Symbiotic fungi grow around other organisms and exchange nutrients with them. Protist Protists include all eukaryotes that are not animals, plants, or fungi, resulting in great diversity between them. Algae are photosynthetic protists that can produce energy from light. Several types of protists use mycelium similar to those of fungi. Protozoa are heterotrophic protists, and different protozoa seek nutrients in different ways. Flagellate protozoa use a flagellum to assist in hunting for food, and some protozoa travel via infectious spores to act as parasites. Many protists are mixotrophic, having both phototrophic and heterotrophic characteristics. 
Mixotrophic protists will typically depend on one source of nutrients while using the other as a supplemental source or a temporary alternative when its primary source is unavailable. Prokaryote Prokaryotes, including bacteria and archaea, vary greatly in how they obtain nutrients across nutritional groups. Prokaryotes can only transport soluble compounds across their cell envelopes, but they can break down chemical components around them. Some lithotrophic prokaryotes are extremophiles that can survive in nutrient-deprived environments by breaking down inorganic matter. Phototrophic prokaryotes, such as cyanobacteria and Chloroflexia, can engage in photosynthesis to obtain energy from sunlight. This is common among bacteria that form in mats atop geothermal springs. Phototrophic prokaryotes typically obtain carbon from assimilating carbon dioxide through the Calvin cycle. Some prokaryotes, such as Bdellovibrio and Ensifer, are predatory and feed on other single-celled organisms. Predatory prokaryotes seek out other organisms through chemotaxis or random collision, merge with the organism, degrade it, and absorb the released nutrients. Predatory strategies of prokaryotes include attaching to the outer surface of the organism and degrading it externally, entering the cytoplasm of the organism, or by entering the periplasmic space of the organism. Groups of predatory prokaryotes may forgo attachment by collectively producing hydrolytic enzymes.
Biology and health sciences
Health, fitness, and medicine
null
21527
https://en.wikipedia.org/wiki/Number%20theory
Number theory
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects (for example, the Riemann zeta function) that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers; for example, as approximated by the latter (Diophantine approximation). The older term for number theory is arithmetic. By the early twentieth century, it had been superseded by number theory. (The word arithmetic is used by the general public to mean elementary calculations; it has also acquired other meanings in mathematical logic, as in Peano arithmetic, and computer science, as in floating-point arithmetic.) The use of the term arithmetic for number theory regained some ground in the second half of the twentieth century, arguably in part due to French influence. In particular, arithmetical is commonly preferred as an adjective to number-theoretic. History Origins Dawn of arithmetic The earliest historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC) contains a list of "Pythagorean triples", that is, integers (a, b, c) such that a² + b² = c². The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity [(x − 1/x)/2]² + 1 = [(x + 1/x)/2]², which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a "table", for example, with a view to applications. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of numerical examples for school problems. While the only surviving evidence of Babylonian number theory is the Plimpton 322 tablet, some authors assert that Babylonian algebra was exceptionally well developed and included the foundations of modern elementary algebra. Late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians. Much earlier sources state that Thales and Pythagoras traveled and studied in Egypt. In book nine of Euclid's Elements, propositions 21–34 are very probably influenced by Pythagorean teachings; it is very simple material ("odd times even is even", "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it"), but it is all that is needed to prove that √2 is irrational. Pythagorean mystics gave great importance to the odd and the even. The discovery that √2 is irrational is credited to the early Pythagoreans (pre-Theodorus). 
By revealing (in modern terms) that numbers could be irrational, this discovery seems to have provoked the first foundational crisis in mathematical history; its proof or its divulgation are sometimes credited to Hippasus, who was expelled or split from the Pythagorean sect. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic), on the one hand, and lengths and proportions (which may be identified with real numbers, whether rational or not), on the other hand. The Pythagorean tradition spoke also of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (seventeenth to early nineteenth centuries). The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. Classical Greece and the early Hellenistic period Aside from a few fragments, the mathematics of Classical Greece is known to us either through the reports of contemporary non-mathematicians or through mathematical works from the early Hellenistic period. In the case of number theory, this means, by and large, Plato and Euclid, respectively. While Asian mathematics influenced Greek and Hellenistic learning, it seems to be the case that Greek mathematics is also an indigenous tradition. Eusebius, PE X, chapter 4 mentions Pythagoras: "In fact the said Pythagoras, while busily studying the wisdom of each nation, visited Babylon, and Egypt, and all Persia, being instructed by the Magi and the priests: and in addition to these he is related to have studied under the Brahmans (these are Indian philosophers); and from some he gathered astrology, from others geometry, and arithmetic and music from others, and different things from different nations, and only from the wise men of Greece did he get nothing, wedded as they were to a poverty and dearth of wisdom: so on the contrary he himself became the author of instruction to the Greeks in the learning which he had procured from abroad." Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: "They say Plato learned all things Pythagorean". Plato had a keen interest in mathematics, and distinguished clearly between arithmetic and calculation. (By arithmetic he meant, in part, theorising on number, rather than what arithmetic or number theory have come to mean.) It is through one of Plato's dialogues—namely, Theaetetus—that it is known that Theodorus had proven that √3, √5, ..., √17 are irrational. Theaetetus was, like Plato, a disciple of Theodorus's; he worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. (Book X of Euclid's Elements is described by Pappus as being largely based on Theaetetus's work.) 
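The kind of remainder problem treated in the Sunzi Suanjing and generalized by Qin Jiushao can be stated in modern terms as finding an integer n with prescribed remainders modulo several pairwise coprime moduli. The sketch below is a minimal modern illustration (a brute-force search, not the historical procedure; the function name is invented), applied to the classic problem of this type, with remainders 2, 3, and 2 on division by 3, 5, and 7.

    from math import prod

    def crt_by_search(remainders, moduli):
        """Return the smallest non-negative n with n % m == r for each (r, m) pair."""
        for n in range(prod(moduli)):   # brute-force search; fine for small moduli
            if all(n % m == r for r, m in zip(remainders, moduli)):
                return n
        return None

    # Find n leaving remainders 2, 3, 2 modulo 3, 5, 7.
    print(crt_by_search([2, 3, 2], [3, 5, 7]))  # prints 23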
Euclid devoted part of his Elements to prime numbers and divisibility, topics that belong unambiguously to number theory and are basic to it (Books VII to IX of Euclid's Elements). In particular, he gave an algorithm for computing the greatest common divisor of two numbers (the Euclidean algorithm; Elements, Prop. VII.2) and the first known proof of the infinitude of primes (Elements, Prop. IX.20). In 1773, Lessing published an epigram he had found in a manuscript during his work as a librarian; it claimed to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as it is known, such equations were first successfully treated by the Indian school. It is not known whether Archimedes himself had a method of solution. Diophantus Little is known about Diophantus of Alexandria; he probably lived in the third century AD, that is, about five hundred years after Euclid. Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form f(x, y) = z² or f(x, y, z) = w². In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought. Āryabhaṭa, Brahmagupta, Bhāskara While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an indigenous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century. Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences n ≡ a₁ (mod m₁), n ≡ a₂ (mod m₂) could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalisation of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations. Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century). Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke. Arithmetic in the Islamic golden age In the early ninth century, the caliph Al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem. 
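In modern notation, the Euclidean algorithm of Elements VII.2 extends to produce integers x and y with ax + by = gcd(a, b), which is also the computational core of problems of the kuṭṭaka type. The sketch below is a standard modern implementation for illustration, not a reconstruction of either historical procedure; the function names are invented.

    def extended_gcd(a: int, b: int):
        """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_linear(a: int, b: int, c: int):
        """One integer solution (x, y) of a*x + b*y = c, or None if no solution exists."""
        g, x, y = extended_gcd(a, b)
        if c % g != 0:
            return None
        k = c // g
        return x * k, y * k

    print(extended_gcd(240, 46))   # (2, -9, 47): 240*(-9) + 46*47 = 2
    print(solve_linear(10, 6, 4))  # (-2, 4): 10*(-2) + 6*4 = 4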
Western Europe in the Middle Ages Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica. Early modern number theory Fermat Pierre de Fermat (1607–1665) never published his writings; in particular, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. In his notes and letters, he scarcely wrote any proofs—he had no models in the area. Over his lifetime, Fermat made the following contributions to the field: One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day. In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer. Fermat's little theorem (1640): if a is not divisible by a prime p, then a^(p−1) ≡ 1 (mod p). If a and b are coprime, then a² + b² is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form x² + y². These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent. In 1657, Fermat posed the problem of solving x² − Ny² = 1 as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent. Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that x⁴ + y⁴ = z⁴ has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that x³ + y³ = z³ has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent). Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to xⁿ + yⁿ = zⁿ for all n ≥ 3; this claim appears in his annotations in the margins of his copy of Diophantus. Euler The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following: Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that p = x² + y² if and only if p ≡ 1 (mod 4); initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to x⁴ + y⁴ = z² (implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method). Pell's equation, first misnamed by Euler. He wrote on the link between continued fractions and Pell's equation. 
First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function. Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form x² + Ny², some of it prefiguring quadratic reciprocity. Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated. Lagrange, Legendre, and Gauss Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations—for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them.) He also studied quadratic forms in full generality (as opposed to mX² + nY²)—defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation ax² + by² + cz² = 0 and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for n = 5 (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain). In his Disquisitiones Arithmeticae (1798), Carl Friedrich Gauss (1777–1855) proved the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory: The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic. In this way, Gauss arguably made a first foray towards both Évariste Galois's work and algebraic number theory. Maturity and division into subfields Starting early in the nineteenth century, the following developments gradually took place: The rise to self-consciousness of number theory (or higher arithmetic) as a field of study. The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra. The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory. 
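Several of the statements of Fermat, Euler, and Legendre recounted above lend themselves to quick numerical spot-checks. The sketch below verifies Fermat's little theorem for one prime, writes a prime congruent to 1 modulo 4 as a sum of two squares, and checks one instance of quadratic reciprocity through Euler's criterion; these are illustrations rather than proofs, and the particular primes are chosen arbitrarily.

    def legendre(a: int, p: int) -> int:
        """Legendre symbol (a/p) for an odd prime p, via Euler's criterion a^((p-1)/2) mod p."""
        r = pow(a, (p - 1) // 2, p)
        return -1 if r == p - 1 else r

    def two_squares(p: int):
        """Naive search for (x, y) with x*x + y*y == p; succeeds when p is a prime with p % 4 == 1."""
        for x in range(int(p ** 0.5) + 1):
            y = round((p - x * x) ** 0.5)
            if x * x + y * y == p:
                return x, y
        return None

    # Fermat's little theorem: a^(p-1) is congruent to 1 mod p for every a not divisible by p.
    p = 97
    print(all(pow(a, p - 1, p) == 1 for a in range(1, p)))   # True

    # Sum of two squares for a prime congruent to 1 mod 4.
    print(two_squares(29))                                    # (2, 5), since 4 + 25 = 29

    # Quadratic reciprocity for the odd primes 13 and 17.
    p, q = 13, 17
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    print(lhs == rhs)                                         # True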
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms). The history of each subfield is briefly addressed in its own section below; see the main article of each subfield for fuller treatments. Many of the most interesting questions in each area remain open and are being actively worked on. Main subdivisions Elementary number theory The term elementary generally denotes a method that does not use complex analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous: for example, proofs based on complex Tauberian theorems (for example, Wiener–Ikehara) are often seen as quite enlightening but not elementary, in spite of using Fourier analysis, rather than complex analysis as such. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a non-elementary one. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. Analytic number theory Analytic number theory may be defined in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or in terms of its concerns, as the study within number theory of estimates on size and density, as opposed to identities. Some subjects generally considered to be part of analytic number theory, for example, sieve theory, are better covered by the second rather than the first definition: some of sieve theory, for instance, uses little analysis, yet it does belong to analytic number theory. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture (or the twin prime conjecture, or the Hardy–Littlewood conjectures), the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory. One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. 
This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function. Algebraic number theory An algebraic number is any complex number that is a solution to some polynomial equation with rational coefficients; for example, every solution of (say) x⁵ + (11/2)x³ − 7x² + 9 = 0 is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or shortly number fields. Algebraic number theory studies algebraic number fields. Thus, analytic and algebraic number theory can and do overlap: the former is defined by its methods, the latter by its objects of study. It could be argued that the simplest kind of number fields (viz., quadratic fields) were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form a + b√d, where a and b are rational numbers and d is a fixed rational number whose square root is not rational.) For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such. The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorisation in algebraic number fields. (For example, in the field generated by the rationals and √−5, the number 6 can be factorised both as 6 = 2 · 3 and 6 = (1 + √−5)(1 − √−5); all of 2, 3, 1 + √−5 and 1 − √−5 are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalisations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K. (For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood. Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields. Diophantine geometry The central problem of Diophantine geometry is to determine when a Diophantine equation has solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. 
More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve or surface. An example here may be helpful. Consider the Pythagorean equation x² + y² = 1; one would like to know its rational solutions, that is, its solutions (x, y) such that x and y are both rational. This is the same as asking for all integer solutions to a² + b² = c²; any solution to the latter equation gives us a solution x = a/c, y = b/c to the former. It is also the same as asking for all points with rational coordinates on the curve described by x² + y² = 1 (a circle of radius 1 centered on the origin). The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation f(x, y) = 0, where f is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial. There is also the closely linked area of Diophantine approximations: given a number x, determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call a/q (with gcd(a, q) = 1) a good approximation to x if |x − a/q| < 1/q^c, where c is large. This question is of special interest if x is an algebraic number. If x cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry, however, is a contemporary term for much the same domain as that covered by the term Diophantine geometry. The term arithmetic geometry is arguably used most often when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations. Other subfields The areas below date from no earlier than the mid-twentieth century, even if they are based on older material. For example, as explained below, algorithms in number theory have a long history, arguably predating the formal concept of proof. However, the modern study of computability began only in the 1930s and 1940s, while computational complexity theory emerged in the 1970s. Probabilistic number theory Probabilistic number theory starts with questions such as the following ones: Take an integer n at random between one and a million. How likely is it to be prime? 
(this is just another way of asking how many primes there are between one and a million). How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average? Much of probabilistic number theory can be seen as an important special case of the study of variables that are almost, but not quite, mutually independent. For example, the event that a random integer between one and a million be divisible by two and the event that it be divisible by three are almost independent, but not quite. It is sometimes said that probabilistic combinatorics uses the fact that whatever happens with probability greater than 0 must happen sometimes; one may say with equal justice that many applications of probabilistic number theory hinge on the fact that whatever is unusual must be rare. If certain algebraic objects (say, rational or integer solutions to certain equations) can be shown to be in the tail of certain sensibly defined distributions, it follows that there must be few of them; this is a very concrete non-probabilistic statement following from a probabilistic one. At times, a non-rigorous, probabilistic approach leads to a number of heuristic algorithms and open problems, notably Cramér's conjecture. Arithmetic combinatorics Arithmetic combinatorics starts with questions like the following ones: Does a fairly "thick" infinite set A contain many elements in arithmetic progression: a, a + b, a + 2b, ..., say? Should it be possible to write large integers as sums of elements of A? These questions are characteristic of arithmetic combinatorics. This is a presently coalescing field; it subsumes additive number theory (which concerns itself with certain very specific sets of arithmetic significance, such as the primes or the squares) and, arguably, some of the geometry of numbers, together with some rapidly developing new material. Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term additive combinatorics is also used; however, the sets being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of A + A and A · A may be compared. Computational number theory While the word algorithm goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period. An early case is that of what is now called the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in Elements, together with a proof of correctness. However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation ax + by = c, or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (fifth to sixth centuries) as an algorithm called kuṭṭaka ("pulveriser"), without a proof of correctness. There are two main questions: "Can this be computed?" and "Can it be computed rapidly?" 
Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring. The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems. Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's tenth problem, that there is no Turing machine which can solve all Diophantine equations. In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (i.e., Diophantine equations for which there are no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. It cannot be proven that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.) Applications The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations". Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis. Number theory has now several modern applications spanning diverse areas such as: Cryptography: Public-key encryption schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis. Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics. Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes. Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory. Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2. Prizes The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize.
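The contrast drawn above between testing primality and factoring can be felt even in a toy experiment. The sketch below uses a Fermat-style probabilistic test (a heuristic check, not a full primality proof) against naive trial division; the two primes multiplied together here are arbitrary small examples, far below cryptographic sizes, and the function names are invented for illustration.

    import random
    import time

    def probably_prime(n: int, rounds: int = 20) -> bool:
        """Fermat test: fast, but only probabilistic (it can be fooled by Carmichael numbers)."""
        if n < 4:
            return n in (2, 3)
        return all(pow(random.randrange(2, n - 1), n - 1, n) == 1 for _ in range(rounds))

    def factor_by_trial_division(n: int):
        """Return a factor pair of n; the running time grows with the smallest prime factor."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1

    n = 1_000_003 * 1_000_033          # product of two primes

    t0 = time.perf_counter()
    print(probably_prime(n), f"{time.perf_counter() - t0:.4f} s")            # False, essentially instant
    t0 = time.perf_counter()
    print(factor_by_trial_division(n), f"{time.perf_counter() - t0:.4f} s")  # (1000003, 1000033), visibly slower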
Mathematics
Counting and numbers
null
21530
https://en.wikipedia.org/wiki/Nitroglycerin
Nitroglycerin
Nitroglycerin (NG) (alternative spelling of nitroglycerine), also known as trinitroglycerol (TNG), nitro, glyceryl trinitrate (GTN), or 1,2,3-trinitroxypropane, is a dense, colorless or pale yellow, oily, explosive liquid most commonly produced by nitrating glycerol with white fuming nitric acid under conditions appropriate to the formation of the nitric acid ester. Chemically, the substance is a nitrate ester rather than a nitro compound, but the traditional name is retained. Discovered in 1846 by Ascanio Sobrero, nitroglycerin has been used as an active ingredient in the manufacture of explosives, namely dynamite, and as such it is employed in the construction, demolition, and mining industries. It is combined with nitrocellulose to form double-based smokeless powder, used as a propellant in artillery and firearms since the 1880s. As is the case for many other explosives, nitroglycerin becomes more and more prone to exploding (i.e. spontaneous decomposition) as the temperature is increased. Upon exposure to heat above 218 °C at sea-level atmospheric pressure, nitroglycerin becomes extremely unstable and tends to explode. When placed in vacuum, it has an autoignition temperature of 270 °C instead. With a melting point of 12.8 °C, the chemical is almost always encountered as a thick and viscous fluid, changing to a crystalline solid when frozen. Although the pure compound itself is colorless, in practice the presence of nitric oxide impurities left over during production tends to give it a slight yellowish tint. Due to its high boiling point and consequently low vapor pressure (0.00026 mmHg at 20 °C), pure nitroglycerin has practically no odor at room temperature, with a sweet and burning taste when ingested. Unintentional detonation may ensue when dropped, shaken, lit on fire, rapidly heated, exposed to sunlight and ozone, subjected to sparks and electrical discharges, or roughly handled. Its sensitivity to exploding is responsible for numerous devastating industrial accidents throughout its history. The chemical's characteristic reactivity may be reduced through the addition of desensitizing agents, which makes it less likely to explode. Clay (diatomaceous earth) is an example of such an agent, forming dynamite, a much more easily handled composition. Addition of other desensitizing agents give birth to the various formulations of dynamite. Nitroglycerin has been used for over 130 years in medicine as a potent vasodilator (causing dilation of the vascular system) to treat heart conditions, such as angina pectoris and chronic heart failure. Though it was previously known that these beneficial effects are due to nitroglycerin being converted to nitric oxide, a potent venodilator, the enzyme for this conversion was only discovered to be mitochondrial aldehyde dehydrogenase (ALDH2) in 2002. Nitroglycerin is available in sublingual tablets, sprays, ointments, and patches. History Nitroglycerin was the first practical explosive produced that was stronger than black powder. It was synthesized by the Italian chemist Ascanio Sobrero in 1846, working under Théophile-Jules Pelouze at the University of Turin. Sobrero initially called his discovery pyroglycerine and warned vigorously against its use as an explosive. 
Nitroglycerin was adopted as a commercially useful explosive by Alfred Nobel, who experimented with safer ways to handle the dangerous compound after his younger brother, Emil Oskar Nobel, and several factory workers were killed in an explosion at the Nobels' armaments factory in 1864 in Heleneborg, Sweden. One year later, Nobel founded Alfred Nobel and Company in Germany and built an isolated factory in the Krümmel hills of Geesthacht near Hamburg. This business exported a liquid combination of nitroglycerin and gunpowder called "Blasting Oil", but this was extremely unstable and difficult to handle, as evidenced in numerous catastrophes. The buildings of the Krümmel factory were destroyed twice. In April 1866, several crates of nitroglycerin were shipped to California, three of which were destined for the Central Pacific Railroad, which planned to experiment with it as a blasting explosive to expedite the construction of the Summit Tunnel through the Sierra Nevada Mountains. One of the remaining crates exploded, destroying a Wells Fargo company office in San Francisco and killing 15 people. This led to a complete ban on the transportation of liquid nitroglycerin in California. The on-site manufacture of nitroglycerin was thus required for the remaining hard-rock drilling and blasting required for the completion of the First transcontinental railroad in North America. On Christmas Day 1867, an attempt to dispose of nine canisters of Blasting Oil that had been illegally stored at the White Swan Inn in the centre of Newcastle upon Tyne resulted in an explosion on the Town Moor that killed eight people. In June 1869, two one-ton wagons loaded with nitroglycerin, then known locally as Powder-Oil, exploded in the road at the North Wales village of Cwm-y-glo. The explosion led to the loss of six lives, many injuries and much damage to the village. Little trace was found of the two horses. The UK Government was so alarmed at the damage caused and what could have happened in a city location (these two tons were part of a larger load coming from Germany via Liverpool) that they soon passed the Nitro-Glycerine Act of 1869. Liquid nitroglycerin was widely banned elsewhere, as well, and these legal restrictions led to Alfred Nobel and his company's developing dynamite in 1867. This was made by mixing nitroglycerin with diatomaceous earth ("Kieselguhr in German) found in the Krümmel hills. Similar mixtures, such as "dualine" (1867), "lithofracteur" (1869), and "gelignite" (1875), were formed by mixing nitroglycerin with other inert absorbents, and many combinations were tried by other companies in attempts to get around Nobel's tightly held patents for dynamite. Dynamite mixtures containing nitrocellulose, which increases the viscosity of the mix, are commonly known as "gelatins". Following the discovery that amyl nitrite helped alleviate chest pain, the physician William Murrell experimented with the use of nitroglycerin to alleviate angina pectoris and to reduce the blood pressure. He began treating his patients with small diluted doses of nitroglycerin in 1878, and this treatment was soon adopted into widespread use after Murrell published his results in the journal The Lancet in 1879. A few months before his death in 1896, Alfred Nobel was prescribed nitroglycerin for this heart condition, writing to a friend: "Isn't it the irony of fate that I have been prescribed nitro-glycerin, to be taken internally! They call it Trinitrin, so as not to scare the chemist and the public." 
The medical establishment also used the name "glyceryl trinitrate" for the same reason. Wartime production rates Large quantities of nitroglycerin were manufactured during World War I and World War II for use as military propellants and in military engineering work. During World War I, HM Factory, Gretna, the largest propellant factory in the United Kingdom, produced about 800 tonnes of cordite RDB per week. This amount required at least 336 tonnes of nitroglycerin per week (assuming no losses in production). The Royal Navy had its own factory at the Royal Navy Cordite Factory, Holton Heath, in Dorset, England. A large cordite factory was also built in Canada during World War I. The Canadian Explosives Limited cordite factory at Nobel, Ontario, was designed to produce cordite on a scale requiring about 286 tonnes of nitroglycerin per month. Instability and desensitization In its undiluted form, nitroglycerin is a contact explosive, with physical shock causing it to explode. If it has not been adequately purified during manufacture it can degrade over time to even more unstable forms. This makes nitroglycerin highly dangerous to transport or use. In its undiluted form, it is one of the world's most powerful explosives, comparable to the more recently developed RDX and PETN. Early in its history, liquid nitroglycerin was found to be "desensitized" by freezing it, at a temperature that varies with its purity. Its sensitivity to shock while frozen is somewhat unpredictable: "It is more insensitive to the shock from a fulminate cap or a rifle ball when in that condition but on the other hand it appears to be more liable to explode on breaking, crushing, tamping, etc." Frozen nitroglycerin is much less energetic than liquid, and so must be thawed before use. Thawing it out can be extremely sensitizing, especially if impurities are present or the warming is too rapid. Ethylene glycol dinitrate or another polynitrate may be added to lower the melting point and thereby avoid the necessity of thawing frozen explosive. Chemically "desensitizing" nitroglycerin is possible to a point where it can be considered about as "safe" as modern high explosives, such as by the addition of ethanol, acetone, or dinitrotoluene. The nitroglycerin may have to be extracted from the desensitizer chemical to restore its effectiveness before use, for example by adding water to draw off ethanol used as a desensitizer. Detonation When nitroglycerin explodes, the products after cooling are given by: 2 C3H5(ONO2)3 → 6 CO2 + 5 H2O + 3 N2 + ½ O2. The heat released can be calculated from the heats of formation. Using −371 kJ/mol for the heat of formation of condensed-phase nitroglycerin gives 1414 kJ/mol released if forming water vapor, and 1524 kJ/mol if forming liquid water. The detonation velocity of nitroglycerin is 7820 meters per second, about 113% of that of TNT. Accordingly, nitroglycerin is considered to be a high-brisance explosive, which is to say, it has excellent shattering ability. The heat liberated during detonation raises the temperature of the gaseous byproducts to several thousand degrees. With a standard enthalpy of explosive decomposition of −1414 kJ/mol and a molecular weight of 227.0865 g/mol, nitroglycerin has a specific explosive energy density of 1.488 kilocalories per gram, or 6.23 kJ/g, making nitroglycerin 49% more energetic on a mass basis than the standard definitional value assigned to TNT (precisely 1 kcal/g). Manufacturing Nitroglycerin can be produced by acid-catalyzed nitration of glycerol (glycerin). 
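The overall reaction described in the following paragraphs can be summarized schematically as the esterification of glycerol's three hydroxyl groups (a sketch of the net stoichiometry only; the sulfuric acid written over the arrow acts as a catalyst and dehydrating agent rather than as a consumed reactant):

```latex
\mathrm{C_3H_5(OH)_3 \;+\; 3\,HNO_3 \;\xrightarrow{\;H_2SO_4\;}\; C_3H_5(ONO_2)_3 \;+\; 3\,H_2O}
```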
The industrial manufacturing process often reacts glycerol with a nearly 1:1 mixture of concentrated sulfuric acid and concentrated nitric acid. This can be produced by mixing white fuming nitric acid—a quite expensive pure nitric acid in which the oxides of nitrogen have been removed, as opposed to red fuming nitric acid, which contains nitrogen oxides—and concentrated sulfuric acid. More often, this mixture is attained by the cheaper method of mixing fuming sulfuric acid, also known as oleum—sulfuric acid containing excess sulfur trioxide—and azeotropic nitric acid (consisting of about 70% nitric acid, with the rest being water). The sulfuric acid produces protonated nitric acid species, which are attacked by glycerol's nucleophilic oxygen atoms. The nitrate group is thus added as an ester, C−O−NO2, and water is produced. This is different from an electrophilic aromatic substitution reaction, in which nitronium ions are the electrophile. The addition of glycerol results in an exothermic reaction (i.e., heat is produced), as usual for mixed-acid nitrations. If the mixture becomes too hot, it results in a runaway reaction, a state of accelerated nitration accompanied by the destructive oxidation of organic materials by the hot nitric acid and the release of poisonous nitrogen dioxide gas, at high risk of an explosion. Thus, the glycerin mixture is added slowly to the reaction vessel containing the mixed acid (not acid to glycerin). The nitrator is cooled with cold water or some other coolant mixture and maintained throughout the glycerin addition at a carefully controlled low temperature, hot enough for the esterification to occur at a fast rate but cold enough to avoid a runaway reaction. The nitrator vessel, often constructed of iron or lead and generally stirred with compressed air, has an emergency trap door at its base, which hangs over a large pool of very cold water and into which the whole reaction mixture (called the charge) can be dumped to prevent an explosion, a process referred to as drowning. If the temperature of the charge exceeds a set limit (the actual value varying by country) or brown fumes are seen in the nitrator's vent, then it is immediately drowned. Use as an explosive and a propellant Nitroglycerin is an oily liquid that explodes when subjected to heat, shock, or flame. The main use of nitroglycerin, by tonnage, is as an ingredient in explosives such as dynamite and in propellants. However, its sensitivity has limited the usefulness of nitroglycerin as a military explosive; less sensitive explosives such as TNT, RDX, and HMX have largely replaced it in munitions. Alfred Nobel developed the use of nitroglycerin as a blasting explosive by mixing nitroglycerin with inert absorbents, particularly "kieselguhr", or diatomaceous earth. He named this explosive dynamite and patented it in 1867. It was supplied ready for use in the form of sticks, individually wrapped in greased waterproof paper. Dynamite and similar explosives were widely adopted for civil engineering tasks, such as in drilling highway and railroad tunnels, for mining, for clearing farmland of stumps, in quarrying, and in demolition work. Likewise, military engineers have used dynamite for construction and demolition work. Nitroglycerin has been used in conjunction with hydraulic fracturing, a process used to recover oil and gas from shale formations. 
The technique involves displacing and detonating nitroglycerin in natural or hydraulically induced fracture systems, or displacing and detonating nitroglycerin in hydraulically induced fractures followed by wellbore shots using pelletized TNT. Nitroglycerin has an advantage over some other high explosives that on detonation it produces practically no visible smoke. Therefore, it is useful as an ingredient in the formulation of various kinds of smokeless powder. Alfred Nobel then developed ballistite, by combining nitroglycerin and guncotton. He patented it in 1887. Ballistite was adopted by a number of European governments, as a military propellant. Italy was the first to adopt it. The British government and the Commonwealth governments adopted cordite instead, which had been developed by Sir Frederick Abel and Sir James Dewar of the United Kingdom in 1889. The original Cordite Mk I consisted of 58% nitroglycerin, 37% guncotton, and 5.0% petroleum jelly. Ballistite and cordite were both manufactured in the form of "cords". Smokeless powders were originally developed using nitrocellulose as the sole explosive ingredient. Therefore, they were known as single-base propellants. A range of smokeless powders that contains both nitrocellulose and nitroglycerin, known as double-base propellants, were also developed. Smokeless powders were originally supplied only for military use, but they were also soon developed for civilian use and were quickly adopted for sports. Some are known as sporting powders. Triple-base propellants contain nitrocellulose, nitroglycerin, and nitroguanidine, but are reserved mainly for extremely high-caliber ammunition rounds such as those used in tank cannons and naval artillery. Blasting gelatin, also known as gelignite, was invented by Nobel in 1875, using nitroglycerin, wood pulp, and sodium or potassium nitrate. This was an early, low-cost, flexible explosive. Medical use Nitroglycerin belongs to a group of drugs called nitrates, which includes many other nitrates like isosorbide dinitrate (Isordil) and isosorbide mononitrate (Imdur, Ismo, Monoket). These agents all exert their effect by being converted to nitric oxide in the body by mitochondrial aldehyde dehydrogenase (ALDH2), and nitric oxide is a potent natural vasodilator. In medicine, nitroglycerin is probably most commonly prescribed for angina pectoris, a painful symptom of ischemic heart disease caused by inadequate flow of blood and oxygen to the heart and as a potent antihypertensive agent. Nitroglycerin corrects the imbalance between the flow of oxygen and blood to the heart and the heart's energy demand. There are many formulations on the market at different doses. At low doses, nitroglycerin dilates veins more than arteries, thereby reducing preload (volume of blood in the heart after filling); this is thought to be its primary mechanism of action. By decreasing preload, the heart has less blood to pump, which decreases oxygen requirement since the heart does not have to work as hard. Additionally, having a smaller preload reduces the ventricular transmural pressure (pressure exerted on the walls of the heart), which decreases the compression of heart arteries to allow more blood to flow through the heart. At higher doses, it also dilates arteries, thereby reducing afterload (decreasing the pressure against which the heart must pump). 
An improved balance between myocardial oxygen supply and demand leads to the following effects during episodes of angina pectoris: subsiding of chest pain, decrease of blood pressure, increase of heart rate, and orthostatic hypotension. Patients experiencing angina when doing certain physical activities can often prevent symptoms by taking nitroglycerin 5 to 10 minutes before the activity. Overdoses may generate methemoglobinemia. Nitroglycerin is available in tablets, ointment, solution for intravenous use, transdermal patches, or sprays administered sublingually. Some forms of nitroglycerin last much longer in the body than others, and the onset and duration of action of each form differ. The sublingual tablet or spray has an onset of about two minutes and a duration of action of about twenty-five minutes. The oral formulation has an onset of about thirty-five minutes and a duration of action of 4–8 hours. The transdermal patch has an onset of thirty minutes and a duration of action of ten to twelve hours. Continuous exposure to nitrates has been shown to cause the body to stop responding normally to this medicine. Experts recommend that the patches be removed at night, allowing the body a few hours to restore its responsiveness to nitrates. Shorter-acting preparations of nitroglycerin can be used several times a day with less risk of developing tolerance. Nitroglycerin was first used by William Murrell to treat angina attacks in 1878, and he published his results the following year. Industrial exposure Infrequent exposure to high doses of nitroglycerin can cause severe headaches known as "NG head" or "bang head". These headaches can be severe enough to incapacitate some people; however, humans develop a tolerance to and dependence on nitroglycerin after long-term exposure. Although rare, withdrawal can be fatal; symptoms include chest pain and other heart problems. These symptoms may be relieved with re-exposure to nitroglycerin or other suitable organic nitrates. For workers in nitroglycerin (NTG) manufacturing facilities, the effects of withdrawal sometimes include "Sunday heart attacks": regular workplace exposure leads to tolerance of the venodilating effects, the tolerance is lost over the weekend, and re-exposure on Monday produces drastic vasodilation with a fast heart rate, dizziness, and a headache, a pattern referred to as "Monday disease". People can be exposed to nitroglycerin in the workplace by breathing it in, skin absorption, swallowing it, or eye contact. The Occupational Safety and Health Administration has set the legal limit (permissible exposure limit) for nitroglycerin exposure in the workplace as 0.2 ppm (2 mg/m3) skin exposure over an 8-hour workday. The National Institute for Occupational Safety and Health has set a recommended exposure limit of 0.1 mg/m3 skin exposure over an 8-hour workday. At levels of 75 mg/m3, nitroglycerin is immediately dangerous to life and health.
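As a rough consistency check on the exposure limits just quoted, the conversion between a volumetric concentration in ppm and a mass concentration in mg/m3 follows from the molar mass of nitroglycerin and the molar volume of air; a minimal sketch, assuming ideal-gas behavior at 25 °C and 1 atm (both assumptions for illustration, not values taken from the regulations themselves):

```python
# Convert a vapor concentration from ppm (by volume) to mg/m^3.
# Assumes ideal-gas behavior at 25 degrees C and 1 atm, where one mole of gas
# occupies roughly 24.45 L; these conditions are an assumption for illustration.
MOLAR_MASS_NG = 227.09    # g/mol, glyceryl trinitrate
MOLAR_VOLUME = 24.45      # L/mol at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm: float, molar_mass: float = MOLAR_MASS_NG) -> float:
    """mg/m^3 = ppm * molar_mass / molar_volume, for ppm measured by volume."""
    return ppm * molar_mass / MOLAR_VOLUME

if __name__ == "__main__":
    pel_ppm = 0.2  # OSHA permissible exposure limit quoted above
    print(f"{pel_ppm} ppm is about {ppm_to_mg_per_m3(pel_ppm):.1f} mg/m^3")
    # prints roughly 1.9 mg/m^3, consistent with the 2 mg/m^3 figure in the text
```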
Physical sciences
Esters and ethers
Chemistry
21544
https://en.wikipedia.org/wiki/Nuclear%20fusion
Nuclear fusion
Nuclear fusion is a reaction in which two or more atomic nuclei (for example, nuclei of hydrogen isotopes deuterium and tritium), combine to form one or more atomic nuclei and neutrons. The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises as a result of the difference in nuclear binding energy between the atomic nuclei before and after the fusion reaction. Nuclear fusion is the process that powers active or main-sequence stars and other high-magnitude stars, where large amounts of energy are released. A nuclear fusion process that produces atomic nuclei lighter than iron-56 or nickel-62 will generally release energy. These elements have a relatively small mass and a relatively large binding energy per nucleon. Fusion of nuclei lighter than these releases energy (an exothermic process), while the fusion of heavier nuclei results in energy retained by the product nucleons, and the resulting reaction is endothermic. The opposite is true for the reverse process, called nuclear fission. Nuclear fusion uses lighter elements, such as hydrogen and helium, which are in general more fusible; while the heavier elements, such as uranium, thorium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron. History American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. Then in 1921, Arthur Eddington suggested hydrogen–helium fusion could be the primary source of stellar energy. Quantum tunneling was discovered by Friedrich Hund in 1927, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to demonstrate that large amounts of energy could be released by fusing small nuclei. Building on the early experiments in artificial nuclear transmutation by Patrick Blackett, laboratory fusion of hydrogen isotopes was accomplished by Mark Oliphant in 1932. In the remainder of that decade, the theory of the main cycle of nuclear fusion in stars was worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. The first artificial thermonuclear fusion reaction occurred during the 1951 Greenhouse Item test of the first boosted fission weapon, which uses a small amount of deuterium–tritium gas to enhance the fission yield. The first thermonuclear weapon detonation, where the vast majority of the yield comes from fusion, was the 1952 Ivy Mike test of a liquid deuterium-fusing device. While fusion bomb detonations were loosely considered for energy production, the possibility of controlled and sustained reactions remained the scientific focus for peaceful fusion power. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, with Los Alamos National Laboratory's Scylla I device producing the first laboratory thermonuclear fusion in 1958, but the technology is still in its developmental phase. The US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of achieving a fusion energy gain factor (Q) of larger than one; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. 
On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output." Prior to this breakthrough, controlled fusion reactions had been unable to produce break-even (self-sustaining) controlled fusion. The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs). Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat plasma to the required temperatures are in development (see ITER). The ITER facility is expected to finish its construction phase in 2025. It will start commissioning the reactor that same year and initiate plasma experiments in 2025, but is not expected to begin full deuterium–tritium fusion until 2035. Private companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems, Helion Energy Inc., General Fusion, TAE Technologies Inc. and Zap Energy Inc. One of the most recent breakthroughs to date in maintaining a sustained fusion reaction occurred in France's WEST fusion reactor. It maintained a 90 million degree plasma for a record time of six minutes. This is a tokamak style reactor which is the same style as the upcoming ITER reactor. Process The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, a manifestation of the strong interaction, which holds protons and neutrons tightly together in the atomic nucleus; and the Coulomb force, which causes positively charged protons in the nucleus to repel each other. Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei. Fusion powers stars and produces virtually all elements in a process called nucleosynthesis. The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation. It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. 
The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other; the result is fusion, an exothermic process. Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV—less than one-millionth of the 17.6 MeV released in the deuterium–tritium (D–T) reaction. Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Via the mass–energy equivalence, fusion yields a 0.7% efficiency of conversion of reactant mass into energy. This can only be exceeded by the extreme cases of the accretion process involving neutron stars or black holes, approaching 40% efficiency, and antimatter annihilation at 100% efficiency. (The complete conversion of one gram of matter would release about 9×10^13 joules of energy.) In stars An important fusion process is the stellar nucleosynthesis that powers stars, including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core). Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity). Eddington's paper reasoned that: The leading theory of stellar energy, the contraction hypothesis, should cause the rotation of a star to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening. The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy. Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom (according to the then-prevailing theory of atomic structure which held atomic weight to be the distinguishing property between elements; work by Henry Moseley and Antonius van den Broek would later show that nuclear charge was the distinguishing property and that a helium nucleus, therefore, consisted of two hydrogen nuclei plus additional mass). This suggested that if such a combination could happen, it would release considerable energy as a byproduct. 
If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (It is now known that most 'ordinary' stars are made of around 70% to 75% hydrogen.) Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time. All of these speculations were proven correct in the following decades. The primary source of solar energy, and that of similar-size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. The heaviest elements are synthesized by fusion that occurs when a more massive star undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis. Requirements A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier. When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations. The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows. The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are 62Ni, 58Fe, 56Fe, and 60Ni. 
Even though the nickel isotope 62Ni is more stable, the iron isotope 56Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create 62Ni through the alpha process. An exception to this general trend is the helium-4 nucleus, whose binding energy per nucleon is higher than that of lithium, the next heavier element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle. The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. It is not until the two nuclei actually come close enough for long enough that the strong attractive nuclear force can take over and overcome the repulsive electrostatic force. This can also be described as the nuclei overcoming the so-called Coulomb barrier. The kinetic energy to achieve this can be lower than the barrier itself because of quantum tunneling. The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products. Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable 5He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier. The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities: f = n₁n₂⟨σv⟩. If a species of nuclei is reacting with a nucleus like itself, such as the D–D reaction, then the product n₁n₂ must be replaced by n²/2. The reactivity ⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state. The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state. 
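To make the rate formula above concrete, the following is a minimal sketch of how a volumetric fusion power follows from f = n₁n₂⟨σv⟩; the reactivity value used is an assumed order-of-magnitude figure for D–T near 10 keV, chosen for illustration rather than taken from a table in this article:

```python
# Minimal sketch: volumetric D-T fusion power from the rate formula f = n1 * n2 * <sigma v>.
# SIGMA_V_DT below is an assumed order-of-magnitude reactivity near 10 keV, for illustration only.
E_FUSION_DT_J = 17.6e6 * 1.602e-19   # 17.6 MeV per D-T reaction, converted to joules
SIGMA_V_DT = 1.1e-22                 # m^3/s, assumed D-T reactivity at roughly 10 keV

def dt_power_density(n_deuterium: float, n_tritium: float, sigma_v: float = SIGMA_V_DT) -> float:
    """Return fusion power density in W/m^3 for the given ion number densities (in m^-3)."""
    reaction_rate = n_deuterium * n_tritium * sigma_v   # fusions per cubic metre per second
    return reaction_rate * E_FUSION_DT_J

if __name__ == "__main__":
    n = 0.5e20  # m^-3: a 50/50 D-T plasma with total ion density 1e20 m^-3 (an assumed, reactor-like value)
    print(f"{dt_power_density(n, n):.2e} W/m^3")  # about 8e5 W/m^3 under these assumptions
```

Under the same assumptions, changing the fuel mix at fixed total ion density changes the rate through the n₁n₂ product, which is one reason fuel proportions enter the optimization discussed later in this article.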
Artificial fusion Thermonuclear fusion Thermonuclear fusion is the process of atomic nuclei combining or "fusing" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. There are two forms of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars; and controlled, where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes. Temperature is a measure of the average kinetic energy of particles, so by heating the material it will gain energy. After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together. In a deuterium–tritium fusion reaction, for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV. Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin. There are two effects that are needed to lower the actual temperature. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling. The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier. For these reasons fuel at lower temperatures will still undergo fusion events, at a lower rate. Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint. Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. 
These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves. A number of attempts to recirculate the ions that "miss" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies . A closely related approach is to merge two FRC's rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches. Muon-catalyzed fusion Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high energy required to create muons, their short 2.2 μs half-life, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion. Other principles Some other confinement principles have been investigated. Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion, and pure fusion bombs feasible. This is not near becoming a practical power source, due to the cost of manufacturing antimatter alone. Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from , combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels, the D–D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces. D–T fusion reactions have been observed with a tritiated erbium target. Nuclear fusion–fission hybrid (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion. Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. 
As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However, it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable. Bubble fusion also called sonofusion was a proposed mechanism for achieving fusion via sonic cavitation which rose to prominence in the early 2000s. Subsequent attempts at replication failed and the principal investigator, Rusi Taleyarkhan, was judged guilty of research misconduct in 2008. Confinement in thermonuclear fusion The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum. Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. This force can take one of three forms: gravitation in stars, magnetic forces in magnetic confinement fusion reactors, or inertial as the fusion reaction may occur before the plasma starts to expand, so the plasma's inertia is keeping the material together. Gravitational confinement One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars—the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8–11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic. Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in the outer parts of the stars over long periods of time, by absorbing energy from fusion in the inside of the star, by absorbing neutrons that are emitted from the fusion process. All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis. Magnetic confinement Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre). The fusion fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems. Inertial confinement A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. 
To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion conditions. The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions to generate neutrons from D-D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium-oxygen. The other successful method was using a miniature Voitenko compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. Electrostatic confinement There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a Penning trap and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain. The most well-known inertial electrostatic confinement (IEC) approach is the fusor. Starting in 1999, a number of amateurs have been able to achieve amateur fusion using these homemade devices. Other IEC devices include the Polywell, MIX POPS and Marble concepts. Important reactions Stellar reaction chains At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm3), the energy release rate is only 276 μW/cm3—about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The dependence of the fusion rate on temperature (roughly as exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0)×10^9 K. Criteria and candidates for terrestrial reactions In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic. To be a useful energy source, a fusion reaction must satisfy several criteria. 
It must:
Be exothermic: This limits the reactants to the low Z (number of protons) side of the curve of binding energy. It also makes helium-4 the most common product because of its extraordinarily tight binding, although 3He and T also show up.
Involve low atomic number (Z) nuclei: This is because the electrostatic repulsion that must be overcome before the nuclei are close enough to fuse (the Coulomb barrier) is directly related to the number of protons a nucleus contains – its atomic number.
Have two reactants: At anything less than stellar densities, three-body collisions are too improbable. In inertial confinement, both stellar densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson criterion, ICF's very short confinement time.
Have two or more products: This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force.
Conserve both protons and neutrons: The cross sections for the weak interaction are too small.
Few reactions meet these criteria. The following are those with the largest cross sections:
(1) D + T → 4He (3.5 MeV) + n0 (14.1 MeV)
(2i) D + D → T (1.01 MeV) + p+ (3.02 MeV) (50%)
(2ii) D + D → 3He (0.82 MeV) + n0 (2.45 MeV) (50%)
(3) D + 3He → 4He (3.6 MeV) + p+ (14.7 MeV)
(4) T + T → 4He + 2 n0 + 11.3 MeV
(5) 3He + 3He → 4He + 2 p+ + 12.9 MeV
(6i) 3He + T → 4He + p+ + n0 + 12.1 MeV (57%)
(6ii) 3He + T → 4He (4.8 MeV) + D (9.5 MeV) (43%)
(7i) D + 6Li → 2 4He + 22.4 MeV
(7ii) D + 6Li → 3He + 4He + n0 + 1.8 MeV
(7iii) D + 6Li → 7Li + p+ + 5.0 MeV
(7iv) D + 6Li → 7Be + n0 + 3.4 MeV
(8) p+ + 6Li → 4He (1.7 MeV) + 3He (2.3 MeV)
(9) 3He + 6Li → 2 4He + p+ + 16.9 MeV
(10) p+ + 11B → 3 4He + 8.7 MeV
For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given. Some reaction candidates can be eliminated at once. The D–6Li reaction has no advantage compared to p+–11B because it is roughly as difficult to burn but produces substantially more neutrons through D–D side reactions. 
There is also a p+–7Li reaction, but its cross section is far too low, except possibly when Ti > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p+–9Be reaction, which is not only difficult to burn, but the 9Be can be easily induced to split into two alpha particles and a neutron. In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors:
n0 + 6Li → T + 4He + 4.784 MeV
n0 + 7Li → T + 4He + n0 − 2.467 MeV
The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of 6Li in tritium production, but had failed to recognize that 7Li fission would greatly increase the yield of the bomb. While 7Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout. To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature are given for a few of these reactions in the following table. Note that many of the reactions form chains. For instance, a reactor fueled with tritium and deuterium creates some 3He, which can then be used in the D–3He reaction if the energies are "right". An elegant idea is to combine reactions (8) and (9). The 3He produced in reaction (8) can react with 6Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate. Abundance of the nuclear fusion fuels Neutronicity, confinement requirement, and power density Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant. Specification of the D–D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and 3He products. T burns so well in a deuterium plasma that it is almost impossible to extract it from the plasma. The D–3He reaction is optimized at a much higher temperature, so the burnup at the optimum D–D temperature may be low. 
Therefore, it seems reasonable to assume the T but not the 3He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1): 5 D → 4He + 2 n0 + 3He + p+, Efus = 4.03 + 17.6 + 3.27 = 24.9 MeV, Ech = 4.03 + 3.5 + 0.82 = 8.35 MeV. For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the fusion energy per D–D reaction as Efus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as Ech = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium). Another unique aspect of the D–D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate. With this choice, we tabulate parameters for four of the most important reactions. The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons, like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (Efus − Ech)/Efus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium. Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z+1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D–D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction. Thus there is a "penalty" of 2/(Z+1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for D–D because each ion can react with any of the other ions, not just a fraction of them. We can now compare these reactions in the following table. The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing a normalization constant by the product of the second and third columns; they indicate the factor by which the other reactions occur more slowly than the D–T reaction under comparable conditions. 
The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D–T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by Efus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D–T reaction and can be considered a measure of the economic potential. Bremsstrahlung losses in quasineutral, isotropic plasmas The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma. The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung. The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect, but they are effectively absorbed (and converted into heat) in less than a millimetre of stainless steel (which is part of a reactor's shield). This means the bremsstrahlung process is carrying energy out of the plasma, cooling it. The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions: The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too. The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. 
This will not change the optimum operating point for D–T very much because the bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D–T is even lower and the required confinement even more difficult to achieve. For D–D and D–³He, bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He–³He, p–⁶Li and p–¹¹B the bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with. Mathematical description of cross section Fusion under classical physics In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometre, the energy needed for the fusion of two hydrogen nuclei is of the order of the Coulomb barrier at contact, e²/(4πε₀r) ≈ 1.4 MeV. This would imply that for the core of the Sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach the threshold is vanishingly small; that is, fusion would never occur. However, fusion in the Sun does occur due to quantum mechanics. Parameterization of cross section The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius as the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces: σ ≈ σ_geometry × T × R, where σ_geometry is the geometric cross section, T is the barrier transparency and R is the reaction characteristics of the reaction. σ_geometry is of the order of the square of the de Broglie wavelength, σ_geometry ≈ (ħ/(m_r v))² ∝ 1/ε, where m_r is the reduced mass of the system and ε is the center-of-mass energy of the system. T can be approximated by the Gamow transparency, which has the form T ≈ exp(−√(ε_G/ε)), where ε_G is the Gamow factor and comes from estimating the quantum tunneling probability through the potential barrier. R contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R is small compared to the variation from the Gamow factor and so R is approximated by a function called the astrophysical S-factor, S(ε), which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form σ(ε) ≈ (S(ε)/ε) exp(−√(ε_G/ε)). More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory. Formulas of fusion cross sections The Naval Research Lab's plasma physics formulary gives the total cross section in barns as a function of the energy E (in keV) of the incident particle towards a target ion at rest, fit by the formula σ_T(E) = (A₅ + A₂/((A₄ − A₃E)² + 1)) / (E (exp(A₁E^(−1/2)) − 1)), with tabulated coefficient values A₁–A₅ for each reaction. Bosch and Hale also report R-matrix-calculated cross sections fitted to observational data with Padé rational approximant coefficients.
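To illustrate the parameterization just described, the following sketch evaluates σ(ε) ≈ (S(ε)/ε)·exp(−√(ε_G/ε)); the S-factor and Gamow energy used are purely illustrative placeholders, not fitted values for any real reaction.

```python
import math

def fusion_cross_section(eps_keV, s_factor_keV_barn, eps_gamow_keV):
    """Toy evaluation of the parameterized cross section described above:
    sigma(eps) ~ S(eps)/eps * exp(-sqrt(eps_G/eps)), returned in barns,
    with eps the centre-of-mass energy in keV."""
    return (s_factor_keV_barn / eps_keV) * math.exp(-math.sqrt(eps_gamow_keV / eps_keV))

# Placeholder values chosen only to show the shape of the curve:
S_PLACEHOLDER = 1.0e4       # keV*barn, assumed roughly constant (illustrative only)
EPS_G_PLACEHOLDER = 1.0e3   # keV, illustrative Gamow energy

for eps in (5, 10, 20, 50, 100):   # centre-of-mass energies in keV
    sigma = fusion_cross_section(eps, S_PLACEHOLDER, EPS_G_PLACEHOLDER)
    print(f"eps = {eps:5.1f} keV   sigma ~ {sigma:.3e} barn")
# The tunnelling exponential dominates: the cross section rises steeply
# with energy even though the S-factor is held constant here.
```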
With energy in units of keV and cross sections in units of millibarn, the S factor takes the form of a Padé rational approximant, S(E) = (A₁ + E(A₂ + E(A₃ + E(A₄ + EA₅)))) / (1 + E(B₁ + E(B₂ + E(B₃ + EB₄)))), with tabulated coefficient values for each reaction. Maxwell-averaged nuclear cross sections In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The Sun, magnetically confined plasmas and inertial confinement fusion systems are well modeled to be in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities ⟨σv⟩ in cm³/s. For temperatures up to about 25 keV the data can be represented by ⟨σv⟩_DD = 2.33×10⁻¹⁴ T^(−2/3) exp(−18.76 T^(−1/3)) cm³/s and ⟨σv⟩_DT = 3.68×10⁻¹² T^(−2/3) exp(−19.94 T^(−1/3)) cm³/s, with T in units of keV.
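A Maxwell-averaged reactivity can also be computed numerically from any cross-section model using the standard reduced-mass average ⟨σv⟩ = √(8/(π m_r)) (kT)^(−3/2) ∫ σ(ε) ε exp(−ε/kT) dε. The sketch below does this for the toy cross section from the previous example; all numerical values are illustrative, not the published fits.

```python
import math

def maxwell_averaged_reactivity(sigma_barn, T_keV, m_r_keV, n_steps=20000, eps_max_keV=2000.0):
    """Average a cross-section model sigma_barn(eps_keV) over a Maxwell-Boltzmann
    distribution at temperature T_keV, using
        <sigma v> = sqrt(8/(pi*m_r)) * (kT)^(-3/2) * Integral sigma(eps)*eps*exp(-eps/kT) d(eps).
    m_r_keV is the reduced mass in keV/c^2; the result is returned in cm^3/s."""
    BARN_CM2 = 1.0e-24    # 1 barn in cm^2
    C_CM_S = 2.998e10     # speed of light in cm/s
    d_eps = eps_max_keV / n_steps
    integral = 0.0
    for i in range(1, n_steps + 1):            # simple midpoint rule
        eps = (i - 0.5) * d_eps
        integral += sigma_barn(eps) * BARN_CM2 * eps * math.exp(-eps / T_keV) * d_eps
    prefactor = math.sqrt(8.0 / (math.pi * m_r_keV)) * C_CM_S * T_keV ** -1.5
    return prefactor * integral

def toy_sigma(eps_keV):
    """Toy cross section in barns (same placeholder S-factor and Gamow energy as above)."""
    return (1.0e4 / eps_keV) * math.exp(-math.sqrt(1.0e3 / eps_keV))

# Reduced mass of ~1 u expressed in keV/c^2 (931,494 keV), temperature 10 keV.
print(f"{maxwell_averaged_reactivity(toy_sigma, T_keV=10.0, m_r_keV=931494.0):.3e} cm^3/s")
```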
Physical sciences
Nuclear physics
null
21562
https://en.wikipedia.org/wiki/NP%20%28complexity%29
NP (complexity)
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine. Equivalently: NP is the set of decision problems solvable in polynomial time by a nondeterministic Turing machine, and NP is the set of decision problems verifiable in polynomial time by a deterministic Turing machine. The first definition is the basis for the abbreviation NP: "nondeterministic, polynomial time". These two definitions are equivalent because the algorithm based on the nondeterministic Turing machine consists of two phases, the first of which consists of a guess about the solution, which is generated in a nondeterministic way, while the second phase consists of a deterministic algorithm that verifies whether the guess is a solution to the problem. It is easy to see that the complexity class P (all problems solvable, deterministically, in polynomial time) is contained in NP (problems where solutions can be verified in polynomial time), because if a problem is solvable in polynomial time, then a solution is also verifiable in polynomial time by simply solving the problem. It is widely believed, but not proven, that P is smaller than NP, in other words, that decision problems exist that cannot be solved in polynomial time even though their solutions can be checked in polynomial time. The hardest problems in NP are called NP-complete problems. An algorithm solving such a problem in polynomial time is also able to solve any other NP problem in polynomial time. If P were in fact equal to NP, then a polynomial-time algorithm would exist for solving NP-complete problems, and by corollary, all NP problems. The complexity class NP is related to the complexity class co-NP, for which the answer "no" can be verified in polynomial time. Whether or not NP = co-NP is another outstanding question in complexity theory. Formal definition The complexity class NP can be defined in terms of NTIME as follows: NP = ⋃_{k ∈ ℕ} NTIME(n^k), where NTIME(n^k) is the set of decision problems that can be solved by a nondeterministic Turing machine in O(n^k) time. Alternatively, NP can be defined using deterministic Turing machines as verifiers. A language L is in NP if and only if there exist polynomials p and q, and a deterministic Turing machine M, such that: for all x and y, the machine M runs in time p(|x|) on input (x, y); for all x in L, there exists a string y of length q(|x|) such that M(x, y) = 1; and for all x not in L and all strings y of length q(|x|), M(x, y) = 0. Background Many computer science problems are contained in NP, like decision versions of many search and optimization problems. Verifier-based definition In order to explain the verifier-based definition of NP, consider the subset sum problem: Assume that we are given some integers, {−7, −3, −2, 5, 8}, and we wish to know whether some of these integers sum up to zero. Here the answer is "yes", since the integers {−3, −2, 5} correspond to the sum (−3) + (−2) + 5 = 0. To answer whether some of the integers add to zero we can create an algorithm that obtains all the possible subsets. As the number of integers that we feed into the algorithm becomes larger, both the number of subsets and the computation time grows exponentially. But notice that if we are given a particular subset, we can efficiently verify whether the subset sum is zero, by summing the integers of the subset.
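A minimal sketch of such a verifier (the function name and structure are illustrative, not taken from the article):

```python
def verify_subset_sum(instance, certificate):
    """Polynomial-time verifier for the subset-sum example discussed above.
    `instance` is the list of integers, `certificate` is the proposed subset
    (the witness). Returns True only if the certificate is a genuine,
    non-empty selection from the instance that sums to zero."""
    remaining = list(instance)
    for x in certificate:
        if x in remaining:
            remaining.remove(x)   # each element may be used at most once
        else:
            return False          # certificate uses a number not available
    return len(certificate) > 0 and sum(certificate) == 0

# The example instance from the text:
print(verify_subset_sum([-7, -3, -2, 5, 8], [-3, -2, 5]))   # True: a valid witness
print(verify_subset_sum([-7, -3, -2, 5, 8], [-7, 5, 8]))    # False: sums to 6
```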
If the sum is zero, that subset is a proof or witness that the answer is "yes". An algorithm that verifies whether a given subset has sum zero is a verifier. Clearly, summing the integers of a subset can be done in polynomial time, and the subset sum problem is therefore in NP. The above example can be generalized for any decision problem. Given any instance I of a problem and a witness W, if there exists a verifier V so that, given the ordered pair (I, W) as input, V returns "yes" in polynomial time when the witness proves that the answer is "yes" and returns "no" in polynomial time otherwise, then the problem is in NP. The "no"-answer version of this problem is stated as: "given a finite set of integers, does every non-empty subset have a nonzero sum?". The verifier-based definition of NP does not require an efficient verifier for the "no"-answers. The class of problems with such verifiers for the "no"-answers is called co-NP. In fact, it is an open question whether all problems in NP also have verifiers for the "no"-answers and thus are in co-NP. In some literature the verifier is called the "certifier", and the witness the "certificate". Machine-definition Equivalent to the verifier-based definition is the following characterization: NP is the class of decision problems solvable by a nondeterministic Turing machine that runs in polynomial time. That is to say, a decision problem L is in NP whenever L is recognized by some polynomial-time nondeterministic Turing machine M with an existential acceptance condition, meaning that an input w belongs to L if and only if some computation path of M on w leads to an accepting state. This definition is equivalent to the verifier-based definition because a nondeterministic Turing machine could solve an NP problem in polynomial time by nondeterministically selecting a certificate and running the verifier on the certificate. Similarly, if such a machine exists, then a polynomial time verifier can naturally be constructed from it. In this light, we can define co-NP dually as the class of decision problems recognizable by polynomial-time nondeterministic Turing machines with an existential rejection condition. Since an existential rejection condition is exactly the same thing as a universal acceptance condition, we can understand the NP vs. co-NP question as asking whether the existential and universal acceptance conditions have the same expressive power for the class of polynomial-time nondeterministic Turing machines. Properties NP is closed under union, intersection, concatenation, Kleene star and reversal. It is not known whether NP is closed under complement (this question is the so-called "NP versus co-NP" question). Why some NP problems are hard to solve Because of the many important problems in this class, there have been extensive efforts to find polynomial-time algorithms for problems in NP. However, there remain a large number of problems in NP that defy such attempts, seeming to require super-polynomial time. Whether these problems are not decidable in polynomial time is one of the greatest open questions in computer science (see P versus NP ("P = NP") problem for an in-depth discussion). An important notion in this context is the set of NP-complete decision problems, which is a subset of NP and might be informally described as the "hardest" problems in NP. If there is a polynomial-time algorithm for even one of them, then there is a polynomial-time algorithm for all the problems in NP.
Because of this, and because dedicated research has failed to find a polynomial algorithm for any NP-complete problem, once a problem has been proven to be NP-complete, this is widely regarded as a sign that a polynomial algorithm for this problem is unlikely to exist. However, in practical uses, instead of spending computational resources looking for an optimal solution, a good enough (but potentially suboptimal) solution may often be found in polynomial time. Also, the real-life applications of some problems are easier than their theoretical equivalents. Equivalence of definitions The two definitions of NP as the class of problems solvable by a nondeterministic Turing machine (TM) in polynomial time and the class of problems verifiable by a deterministic Turing machine in polynomial time are equivalent. The proof is described by many textbooks, for example, Sipser's Introduction to the Theory of Computation, section 7.3. To show this, first, suppose we have a deterministic verifier. A non-deterministic machine can simply nondeterministically run the verifier on all possible proof strings (this requires only polynomially many steps because it can nondeterministically choose the next character in the proof string in each step, and the length of the proof string must be polynomially bounded). If any proof is valid, some path will accept; if no proof is valid, the string is not in the language and it will reject. Conversely, suppose we have a non-deterministic TM called A accepting a given language L. At each of its polynomially many steps, the machine's computation tree branches in at most a finite number of directions. There must be at least one accepting path, and the string describing this path is the proof supplied to the verifier. The verifier can then deterministically simulate A, following only the accepting path, and verifying that it accepts at the end. If A rejects the input, there is no accepting path, and the verifier will always reject. Relationship to other classes NP contains all problems in P, since one can verify any instance of the problem by simply ignoring the proof and solving it. NP is contained in PSPACE—to show this, it suffices to construct a PSPACE machine that loops over all proof strings and feeds each one to a polynomial-time verifier. Since a polynomial-time machine can only read polynomially many bits, it cannot use more than polynomial space, nor can it read a proof string occupying more than polynomial space (so we do not have to consider proofs longer than this). NP is also contained in EXPTIME, since the same algorithm operates in exponential time. co-NP contains those problems that have a simple proof for no instances, sometimes called counterexamples. For example, primality testing trivially lies in co-NP, since one can refute the primality of an integer by merely supplying a nontrivial factor. NP and co-NP together form the first level in the polynomial hierarchy, higher only than P. NP is defined using only deterministic machines. If we permit the verifier to be probabilistic (this, however, is not necessarily a BPP machine), we get the class MA solvable using an Arthur–Merlin protocol with no communication from Arthur to Merlin. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP. 
NP is a class of decision problems; the analogous class of function problems is FNP. The only known strict inclusions come from the time hierarchy theorem and the space hierarchy theorem, and respectively they are NP ⊊ NEXPTIME and NP ⊊ EXPSPACE. Other characterizations In terms of descriptive complexity theory, NP corresponds precisely to the set of languages definable by existential second-order logic (Fagin's theorem). NP can be seen as a very simple type of interactive proof system, where the prover comes up with the proof certificate and the verifier is a deterministic polynomial-time machine that checks it. It is complete because the right proof string will make it accept if there is one, and it is sound because the verifier cannot accept if there is no acceptable proof string. A major result of complexity theory is that NP can be characterized as the problems solvable by probabilistically checkable proofs where the verifier uses O(log n) random bits and examines only a constant number of bits of the proof string (the class PCP(log n, 1)). More informally, this means that the NP verifier described above can be replaced with one that just "spot-checks" a few places in the proof string, and using a limited number of coin flips can determine the correct answer with high probability. This allows several results about the hardness of approximation algorithms to be proven. Examples P All problems in P. Given a certificate for a problem in P, we can ignore the certificate and just solve the problem in polynomial time. Integer factorization The decision problem version of the integer factorization problem: given integers n and k, is there a factor f with 1 < f < k and f dividing n? NP-complete problems Every NP-complete problem is in NP. Boolean satisfiability The Boolean satisfiability problem (SAT), where we want to know whether or not a certain formula in propositional logic with Boolean variables is true for some value of the variables. Travelling salesman The decision version of the travelling salesman problem is in NP. Given an input matrix of distances between n cities, the problem is to determine if there is a route visiting all cities with total distance less than k. A proof can simply be a list of the cities. Then verification can clearly be done in polynomial time: it simply adds the matrix entries corresponding to the paths between the cities (a sketch of such a check follows the examples below). A nondeterministic Turing machine can find such a route as follows: At each city it visits it will "guess" the next city to visit, until it has visited every vertex. If it gets stuck, it stops immediately. At the end it verifies that the route it has taken has cost less than k in O(n) time. One can think of each guess as "forking" a new copy of the Turing machine to follow each of the possible paths forward, and if at least one machine finds a route of distance less than k, that machine accepts the input. (Equivalently, this can be thought of as a single Turing machine that always guesses correctly.) A binary search on the range of possible distances can convert the decision version of Traveling Salesman to the optimization version, by calling the decision version repeatedly (a polynomial number of times). Subgraph isomorphism The subgraph isomorphism problem of determining whether a graph G contains a subgraph that is isomorphic to a graph H.
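Returning to the travelling-salesman example above, a minimal sketch of the certificate check (the function and the small distance matrix are illustrative, not from the article):

```python
def verify_tsp_route(distances, route, k):
    """Polynomial-time check of a travelling-salesman certificate: `distances`
    is an n x n matrix, `route` a proposed ordering of all n cities, and k the
    distance bound. Returns True only if the route visits every city exactly
    once and its total length (returning to the start) is less than k."""
    n = len(distances)
    if sorted(route) != list(range(n)):   # must visit each city exactly once
        return False
    total = sum(distances[route[i]][route[(i + 1) % n]] for i in range(n))
    return total < k

# Small illustrative instance (distances are made up for the example):
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(verify_tsp_route(d, [0, 1, 3, 2], 25))   # True: 2 + 4 + 8 + 9 = 23 < 25
print(verify_tsp_route(d, [0, 2, 1, 3], 25))   # False: 9 + 6 + 4 + 10 = 29
```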
Mathematics
Complexity theory
null
21626
https://en.wikipedia.org/wiki/Near-Earth%20object
Near-Earth object
A near-Earth object (NEO) is any small Solar System body orbiting the Sun whose closest approach to the Sun (perihelion) is less than 1.3 times the Earth–Sun distance (astronomical unit, AU). This definition applies to the object's orbit around the Sun, rather than its current position, thus an object with such an orbit is considered an NEO even at times when it is far from making a close approach of Earth. If an NEO's orbit crosses the Earth's orbit, and the object is larger than about 140 meters across, it is considered a potentially hazardous object (PHO). Most known PHOs and NEOs are asteroids, but about a third of a percent are comets. There are over 37,000 known near-Earth asteroids (NEAs) and over 120 known short-period near-Earth comets (NECs). A number of solar-orbiting meteoroids were large enough to be tracked in space before striking Earth. It is now widely accepted that collisions in the past have had a significant role in shaping the geological and biological history of Earth. Asteroids as small as about 20 meters in diameter can cause significant damage to the local environment and human populations. Larger asteroids penetrate the atmosphere to the surface of the Earth, producing craters if they impact a continent or tsunamis if they impact the sea. Interest in NEOs has increased since the 1980s because of greater awareness of this risk. Asteroid impact avoidance by deflection is possible in principle, and methods of mitigation are being researched. Two scales, the simple Torino scale and the more complex Palermo scale, rate the risk presented by an identified NEO based on the probability of it impacting the Earth and on how severe the consequences of such an impact would be. Some NEOs have had temporarily positive Torino or Palermo scale ratings after their discovery. Since 1998, the United States, the European Union, and other nations have been scanning the sky for NEOs in an effort called Spaceguard. The initial US Congress mandate to NASA to catalog at least 90% of NEOs that are at least 1 km in diameter, sufficient to cause a global catastrophe, was met by 2011. In later years, the survey effort was expanded to include smaller objects which have the potential for large-scale, though not global, damage. NEOs have low surface gravity, and many have Earth-like orbits that make them easy targets for spacecraft. To date, five near-Earth comets and six near-Earth asteroids, one of them with a moon, have been visited by spacecraft. Samples of three have been returned to Earth, and one successful deflection test was conducted. Similar missions are in progress. Preliminary plans for commercial asteroid mining have been drafted by private startup companies, but few of these plans have been pursued. Definitions Near-Earth objects (NEOs) are formally defined by the International Astronomical Union (IAU) as all small Solar System bodies with orbits around the Sun that are at least partially closer than 1.3 astronomical units (AU; Sun–Earth distance) from the Sun. This definition excludes larger bodies such as planets, like Venus; natural satellites which orbit bodies other than the Sun, like Earth's Moon; and artificial bodies orbiting the Sun. A small Solar System body can be an asteroid or a comet, thus an NEO is either a near-Earth asteroid (NEA) or a near-Earth comet (NEC). The organisations cataloging NEOs further limit their definition of NEO to objects with an orbital period under 200 years, a restriction that applies to comets in particular, but this approach is not universal.
Some authors further restrict the definition to orbits that are at least partly further than 0.983 AU away from the Sun. NEOs are thus not necessarily currently near the Earth, but they can potentially approach the Earth relatively closely. Many NEOs have complex orbits due to constant perturbation by the Earth's gravity, and some of them can temporarily change from an orbit around the Sun to one around the Earth, but the term is applied flexibly for these objects, too. The orbits of some NEOs intersect that of the Earth, so they pose a collision danger. These are considered potentially hazardous objects (PHOs) if their estimated diameter is above 140 meters. PHOs include potentially hazardous asteroids (PHAs). PHAs are defined based on two parameters relating to respectively their potential to approach the Earth dangerously closely and the estimated consequences that an impact would have if it occurs. Objects with both an Earth minimum orbit intersection distance (MOID) of 0.05 AU or less and an absolute magnitude of 22.0 or brighter (a rough indicator of large size) are considered PHAs. Objects that either cannot approach closer to the Earth than 0.05 AU, or which are fainter than H = 22.0 (about 140 meters in diameter with an assumed albedo of 14%), are not considered PHAs. History of human awareness of NEOs The first near-Earth objects to be observed by humans were comets. Their extraterrestrial nature was recognised and confirmed only after Tycho Brahe tried to measure the distance of a comet through its parallax in 1577 and the lower limit he obtained was well above the Earth diameter; the periodicity of some comets was first recognised in 1705, when Edmond Halley published his orbit calculations for the returning object now known as Halley's Comet. The 1758–1759 return of Halley's Comet was the first predicted comet appearance. The extraterrestrial origin of meteors (shooting stars) was only recognised on the basis of the analysis of the 1833 Leonid meteor shower by astronomer Denison Olmsted. The 33-year period of the Leonids led astronomers to suspect that they originate from a comet that would today be classified as an NEO, which was confirmed in 1867, when astronomers found that the newly discovered comet 55P/Tempel–Tuttle has the same orbit as the Leonids. The first near-Earth asteroid to be discovered was 433 Eros in 1898. The asteroid was subject to several extensive observation campaigns, primarily because measurements of its orbit enabled a precise determination of the then imperfectly known distance of the Earth from the Sun. Encounters with Earth If a near-Earth object is near the part of its orbit closest to Earth's at the same time Earth is at the part of its orbit closest to the near-Earth object's orbit, the object has a close approach, or, if the orbits intersect, could even impact the Earth or its atmosphere. Close approaches To date, only 23 comets have been observed to pass within of Earth, including 10 which are or have been short-period comets. Two of these near-Earth comets, Halley's Comet and 73P/Schwassmann–Wachmann, have been observed during multiple close approaches. The closest observed approach was 0.0151 AU (5.88 LD) for Lexell's Comet on July 1, 1770. After an orbit change due to a close approach of Jupiter in 1779, this object is no longer an NEC. The closest approach ever observed for a current short-period NEC is 0.0229 AU (8.92 LD) for Comet Tempel–Tuttle in 1366.
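The PHA criteria described earlier in this section reduce to a simple test on two catalogued quantities, the Earth MOID and the absolute magnitude H; a minimal sketch using the thresholds quoted above (function name is illustrative):

```python
def is_potentially_hazardous(moid_au, abs_magnitude_h):
    """Sketch of the PHA test described above: an asteroid is flagged as
    potentially hazardous when its Earth MOID is 0.05 AU or less AND its
    absolute magnitude H is 22.0 or brighter (smaller H means brighter/larger)."""
    return moid_au <= 0.05 and abs_magnitude_h <= 22.0

print(is_potentially_hazardous(moid_au=0.03, abs_magnitude_h=19.5))  # True
print(is_potentially_hazardous(moid_au=0.03, abs_magnitude_h=24.0))  # False: too small/faint
print(is_potentially_hazardous(moid_au=0.20, abs_magnitude_h=18.0))  # False: never comes close
```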
Orbital calculations show that P/1999 J6 (SOHO), a faint sungrazing comet and confirmed short-period NEC observed only during its close approaches to the Sun, passed Earth undetected at a distance of 0.0120 AU (4.65 LD) on June 12, 1999. In 1937, asteroid 69230 Hermes was discovered when it passed the Earth at twice the distance of the Moon. On June 14, 1968, the diameter asteroid 1566 Icarus passed Earth at a distance of , or 16.5 times the distance of the Moon. During this approach, Icarus became the first minor planet to be observed using radar. This was the first close approach predicted years in advance, since Icarus had been discovered in 1949. The first near-Earth asteroid known to have passed Earth closer than the distance of the Moon was , a body which passed at a distance of . By the 2010s, each year, several mostly small NEOs were observed passing Earth closer than the distance of the Moon. As astronomers became able to discover ever smaller and fainter and ever more numerous near-Earth objects, they began to routinely observe and catalogue close approaches. , the closest approach without atmospheric or ground impact ever detected was an encounter with asteroid on November 14, 2020. The NEA was detected receding from Earth; calculations showed that on the day before, it had a close approach at about from the Earth's centre, or about above its surface. On November 8, 2011, asteroid , relatively large at about in diameter, passed within (0.845 lunar distances) of Earth. On February 15, 2013, the asteroid 367943 Duende () passed approximately above the surface of Earth, closer than satellites in geosynchronous orbit. The asteroid was not visible to the unaided eye. This was the first sub-lunar close passage of an object discovered during a previous passage, and was thus the first to be predicted well in advance. Earth-grazers Some small asteroids that enter the upper atmosphere of Earth at a shallow angle remain intact and leave the atmosphere again, continuing on a solar orbit. During the passage through the atmosphere, due to the burning of its surface, such an object can be observed as an Earth-grazing fireball. On August 10, 1972, a meteor that became known as the 1972 Great Daylight Fireball was witnessed by many people and even filmed as it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It passed within of the Earth's surface. On October 13, 1990, Earth-grazing meteoroid EN131090 was observed above Czechoslovakia and Poland, moving at along a trajectory from south to north. The closest approach to the Earth was above the surface. It was captured by two all-sky cameras of the European Fireball Network, which for the first time enabled geometric calculations of the orbit of such a body. Impacts When a near-Earth object impacts Earth, objects up to a few tens of metres across ordinarily explode in the upper atmosphere (usually harmlessly), with most or all of the solids vaporized and only small amounts of meteorites arriving to the Earth surface. Larger objects, by contrast, hit the water surface, forming tsunami waves, or the solid surface, forming impact craters. The frequency of impacts of objects of various sizes is estimated on the basis of orbit simulations of NEO populations, the frequency of impact craters on the Earth and the Moon, and the frequency of close encounters. 
The study of impact craters indicates that impact frequency has been more or less steady for the past 3.5 billion years, which requires a steady replenishment of the NEO population from the asteroid main belt. One impact model based on widely accepted NEO population models estimates the average time between the impact of two stony asteroids with a diameter of at least at about one year; for asteroids across (which impacts with as much energy as the atomic bomb dropped on Hiroshima, approximately 15 kilotonnes of TNT) at five years, for asteroids across (an impact energy of 10 megatons, comparable to the Tunguska event in 1908) at 1,300 years, for asteroids across at 440 thousand years, and for asteroids across at 18 million years. Some other models estimate similar impact frequencies, while others calculate higher frequencies. For Tunguska-sized (10 megaton) impacts, the estimates range from one event every 2,000–3,000 years to one event every 300 years. The second-largest observed event after the Tunguska meteor was a 1.1 megaton air blast in 1963 near the Prince Edward Islands between South Africa and Antarctica. However, this event was detected only by infrasound sensors, which led to speculation that this may have been a nuclear test. The third-largest, but by far best-observed impact, was the Chelyabinsk meteor of 15 February 2013. A previously unknown asteroid exploded above this Russian city with an equivalent blast yield of 400–500 kilotons. The calculated orbit of the pre-impact asteroid is similar to that of Apollo asteroid , making the latter the meteor's possible parent body. On October 7, 2008, 20 hours after it was first observed and 11 hours after its trajectory has been calculated and announced, asteroid blew up above the Nubian Desert in Sudan. It was the first time that an asteroid was observed and its impact was predicted prior to its entry into the atmosphere as a meteor. 10.7 kg of meteorites were recovered after the impact. , eleven impacts have been predicted, all of them small bodies that produced meteor explosions, with some impacts in remote areas only detected by the Comprehensive Nuclear-Test-Ban Treaty Organization's International Monitoring System (IMS), a network of infrasound sensors designed to detect the detonation of nuclear devices. Asteroid impact prediction remains in its infancy and successfully predicted asteroid impacts are rare. The vast majority of impacts recorded by IMS are not predicted. Observed impacts aren't restricted to the surface and atmosphere of Earth. Dust-sized NEOs have impacted man-made spacecraft, including the space probe Long Duration Exposure Facility, which collected interplanetary dust in low Earth orbit for six years from 1984. Impacts on the Moon can be observed as flashes of light with a typical duration of a fraction of a second. The first lunar impacts were recorded during the 1999 Leonid storm. Subsequently, several continuous monitoring programs were launched. A lunar impact that was observed on September 11, 2013, lasted 8 seconds, was likely caused by an object in diameter, and created a new crater across, was the largest ever observed . Risk Through human history, the risk that any near-Earth object poses has been viewed having regard to both the culture and the technology of human society. Through history, humans have associated NEOs with changing risks, based on religious, philosophical or scientific views, as well as humanity's technological or economical capability to deal with such risks. 
Thus, NEOs have been seen as omens of natural disasters or wars; harmless spectacles in an unchanging universe; the source of era-changing cataclysms or potentially poisonous fumes (during Earth's passage through the tail of Halley's Comet in 1910); and finally as a possible cause of a crater-forming impact that could even cause extinction of humans and other life on Earth. The potential of catastrophic impacts by near-Earth comets was recognised as soon as the first orbit calculations provided an understanding of their orbits: in 1694, Edmond Halley presented a theory that Noah's flood in the Bible was caused by a comet impact. Human perception of near-Earth asteroids as benign objects of fascination or killer objects with high risk to human society has ebbed and flowed during the short time that NEAs have been scientifically observed. The 1937 close approach of Hermes and the 1968 close approach of Icarus first raised impact concerns among scientists. Icarus earned significant public attention due to alarmist news reports. while Hermes was considered a threat because it was lost after its discovery; thus its orbit and potential for collision with Earth were not known precisely. Hermes was only re-discovered in 2003, and it is now known to be no threat for at least the next century. Scientists have recognised the threat of impacts that create craters much bigger than the impacting bodies and have indirect effects on an even wider area since the 1980s, with mounting evidence for the theory that the Cretaceous–Paleogene extinction event (in which the non-avian dinosaurs died out) 65 million years ago was caused by a large asteroid impact. On March 23, 1989, the diameter Apollo asteroid 4581 Asclepius (1989 FC) missed the Earth by . If the asteroid had impacted it would have created the largest explosion in recorded history, equivalent to 20,000 megatons of TNT. It attracted widespread attention because it was discovered only after the closest approach. From the 1990s, a typical frame of reference in searches for NEOs has been the scientific concept of risk. The awareness of the wider public of the impact risk rose after the observation of the impact of the fragments of Comet Shoemaker–Levy 9 into Jupiter in July 1994. In March 1998, early orbit calculations for recently discovered asteroid showed a potential 2028 close approach from the Earth, well within the orbit of the Moon, but with a large error margin allowing for a direct hit. Further data allowed a revision of the 2028 approach distance to , with no chance of collision. By that time, inaccurate reports of a potential impact had caused a media storm. In 1998, the movies Deep Impact and Armageddon popularised the notion that near-Earth objects could cause catastrophic impacts. Also at that time, a conspiracy theory arose about a supposed 2003 impact of a planet called Nibiru with Earth, which persisted on the internet as the predicted impact date was moved to 2012 and then 2017. Risk scales There are two schemes for the scientific classification of impact hazards from NEOs, as a way to communicate the risk of impacts to the general public. The simple Torino scale was established at an IAU workshop in Torino in June 1999, in the wake of the public confusion about the impact risk of . 
It rates the risks of impacts in the next 100 years according to impact energy and impact probability, using integer numbers between 0 and 10: ratings of 0 and 1 are of no concern to astronomers or the public, ratings of 2 to 4 are used for events with increasing magnitude of concern to astronomers trying to make more precise orbit calculations, but not yet a concern for the public, ratings of 5 to 7 are meant for impacts of increasing magnitude which are not certain but warrant public concern and governmental contingency planning, 8 to 10 would be used for certain collisions of increasing severity. The more complex Palermo Technical Impact Hazard Scale, established in 2002, compares the likelihood of an impact at a certain date to the probable number of impacts of a similar energy or greater until the possible impact, and takes the logarithm of this ratio. Thus, a Palermo scale rating can be any positive or negative real number, and risks of any concern are indicated by values above zero. Unlike the Torino scale, the Palermo scale is not sensitive to newly discovered small objects with an orbit known with low confidence. Highly rated risks The National Aeronautics and Space Administration NASA maintains an automated system to evaluate the threat from known NEOs over the next 100 years, which generates the continuously updated Sentry Risk Table. All or nearly all of the objects are highly likely to drop off the list eventually as more observations come in, reducing the uncertainties and enabling more accurate orbital predictions. A similar table is maintained on NEODyS (Near Earth Objects Dynamic Site) by the European Space Agency (ESA). In March 2002, became the first asteroid with a temporarily positive rating on the Torino Scale, with about a 1 in 9,300 chance of an impact in 2049. Additional observations reduced the estimated risk to zero, and the asteroid was removed from the Sentry Risk Table in April 2002. It is now known that within the next two centuries, will pass the Earth at a safe closest distance (perigee) of on August 31, 2080. Asteroid was lost after its 1950 discovery, since its observations over just 17 days were insufficient to precisely determine its orbit. It was rediscovered in December 2000 prior to a close approach the next year, when new observations, including radar imaging, allowed much more precise orbit calculations. It has a diameter of about a kilometer (0.6 miles), and an impact would therefore be globally catastrophic. Although this asteroid will not strike for at least 800 years and thus has no Torino scale rating, it was added to the Sentry list in April 2002 as the first object with a Palermo scale value greater than zero. The then-calculated 1 in 300 maximum chance of impact and +0.17 Palermo scale value was roughly 50% greater than the background risk of impact by all similarly large objects until 2880. After additional radar and optical observations, , the probability of this impact is assessed at 1 in 2,600. The corresponding Palermo scale value of −0.93 is the highest for all objects on the Sentry List Table. On December 24, 2004, asteroid 99942 Apophis (at the time yet unnamed and therefore known only by its provisional designation ) was assigned a 4 on the Torino scale, the highest rating given to date, as the information available at the time translated to a 1.6% chance of Earth impact in April 2029. 
As observations were collected over the next three days, the calculated chance of impact first increased to as high as 2.7%, then fell back to zero, as the shrinking uncertainty zone for this close approach no longer included the Earth. There was at that time still some uncertainty about potential impacts during later close approaches. However, as the precision of orbital calculations improved due to additional observations, the risk of impact at any date was completely eliminated and Apophis was removed from the Sentry Risk Table in February 2021. In February 2006, , having a diameter around 300 metres, was assigned a Torino Scale rating of 2 due to a close encounter predicted for May 4, 2102. After additional observations allowed increasingly precise predictions, the Torino rating was lowered first to 1 in May 2006, then to 0 in October 2006, and the asteroid was removed from the Sentry Risk Table entirely in February 2008. In 2021, was listed with the highest chance of impacting Earth, at 1 in 22 on September 5, 2095. At only across, the asteroid however is much too small to be considered a potentially hazardous asteroid and it poses no serious threat: the possible 2095 impact therefore rated only −3.32 on the Palermo Scale. Observations during the August 2022 close approach were expected to ascertain whether the asteroid will impact or miss Earth in 2095. , the risk of the 2095 impact was put at 1 in 10, still the highest, with a Palermo Scale rating of −2.97. Projects to minimize the threat A year before the 1968 close approach of asteroid Icarus, Massachusetts Institute of Technology students launched Project Icarus, devising a plan to deflect the asteroid with rockets in case it was found to be on a collision course with Earth. Project Icarus received wide media coverage, and inspired the 1979 disaster movie Meteor, in which the US and the USSR join forces to blow up an Earth-bound fragment of an asteroid hit by a comet. The first astronomical program dedicated to the discovery of near-Earth asteroids was the Palomar Planet-Crossing Asteroid Survey. The link to impact hazard, the need for dedicated survey telescopes and options to head off an eventual impact were first discussed at a 1981 interdisciplinary conference in Snowmass, Colorado. Plans for a more comprehensive survey, named the Spaceguard Survey, were developed by NASA from 1992, under a mandate from the United States Congress. To promote the survey on an international level, the International Astronomical Union (IAU) organised a workshop at Vulcano, Italy in 1995, and set up The Spaceguard Foundation also in Italy a year later. In 1998, the United States Congress gave NASA a mandate to detect 90% of near-Earth asteroids over diameter (that threaten global devastation) by 2008. Several surveys have undertaken "Spaceguard" activities (an umbrella term), including Lincoln Near-Earth Asteroid Research (LINEAR), Spacewatch, Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth-Object Search (LONEOS), Catalina Sky Survey (CSS), Campo Imperatore Near-Earth Object Survey (CINEOS), Japanese Spaceguard Association, Asiago-DLR Asteroid Survey (ADAS) and Near-Earth Object WISE (NEOWISE). As a result, the ratio of the known and the estimated total number of near-Earth asteroids larger than 1 km in diameter rose from about 20% in 1998 to 65% in 2004, 80% in 2006, and 93% in 2011. The original Spaceguard goal has thus been met, only three years late. , 867 NEAs larger than 1 km have been discovered. 
In 2005, the original USA Spaceguard mandate was extended by the George E. Brown, Jr. Near-Earth Object Survey Act, which calls for NASA to detect 90% of NEOs with diameters of or greater, by 2020. In September 2020, it was estimated that about half of these have been found, but objects of this size hit the Earth only about once in 30,000 years. In December 2023, using a lower absolute brightness estimate for smaller asteroids, the ratio of discovered NEOs with diameters of or greater was estimated at 38%. The Chile-based Vera C. Rubin Observatory, which will survey the southern sky for transient events from 2025, is expected to increase the number of known asteroids by a factor of 10 to 100 and increase the ratio of known NEOs with diameters of or greater to at least 60%, while the NEO Surveyor satellite, to be launched in 2027, is expected to push the ratio to 76% during its 5-year mission. In January 2016, NASA announced the creation of the Planetary Defense Coordination Office (PDCO) to track NEOs larger than about in diameter and coordinate an effective threat response and mitigation effort. Survey programs aim to identify threats years in advance, giving humanity time to prepare a space mission to avert the threat. The ATLAS project, by contrast, aims to find impacting asteroids shortly before impact, much too late for deflection maneuvers but still in time to evacuate and otherwise prepare the affected Earth region. Another project, the Zwicky Transient Facility (ZTF), which surveys for objects that change their brightness rapidly, also detects asteroids passing close to Earth. Scientists involved in NEO research have also considered options for actively averting the threat if an object is found to be on a collision course with Earth. All viable methods aim to deflect rather than destroy the threatening NEO, because the fragments would still cause widespread destruction. Deflection, which means a change in the object's orbit months to years prior to the predicted impact, also requires orders of magnitude less energy. Number and classification When an NEO is detected, like all other small Solar System bodies, its positions and brightness are submitted to the (IAU's) Minor Planet Center (MPC) for cataloging. The MPC maintains separate lists of confirmed NEOs and potential NEOs. The MPC maintains a separate list for the potentially hazardous asteroids (PHAs). NEOs are also catalogued by two separate units of the Jet Propulsion Laboratory (JPL) of NASA: the Center for Near Earth Object Studies (CNEOS) and the Solar System Dynamics Group. CNEOS's catalog of near-Earth objects includes the approach distances of asteroids and comets. NEOs are also catalogued by a unit of ESA, the Near-Earth Objects Coordination Centre (NEOCC). Near-Earth objects are classified as meteoroids, asteroids, or comets depending on size, composition, and orbit. Those which are asteroids can additionally be members of an asteroid family, and comets create meteoroid streams that can generate meteor showers. and according to statistics maintained by CNEOS, 37,378 NEOs have been discovered. Only 123 (0.33%) of them are comets, whilst 37,255 (99.67%) are asteroids. 2,465 of those NEOs are classified as potentially hazardous asteroids (PHAs). , 1,872 NEAs appear on the Sentry impact risk page at the NASA website. 
All but 106 of these NEAs are less than 50 meters in diameter and only one recently discovered object is placed even in the "green zone" (Torino Scale 1), meaning that none warrant the attention of the general public or even the special attention of astronomers. Observational biases The main problem with estimating the number of NEOs is that the probability of detecting one is influenced by a number of aspects of the NEO, starting naturally with its size but also including the characteristics of its orbit and the reflectivity of its surface. What is easily detected will be more counted, and these observational biases need to be compensated when trying to calculate the number of bodies in a population from the list of its detected members. Bigger asteroids reflect more light, and the two biggest near-Earth objects, 433 Eros and 1036 Ganymed, were naturally also among the first to be detected. 1036 Ganymed is about in diameter and 433 Eros is about in diameter. Meanwhile, the apparent brightness of objects that are closer is higher, introducing a bias that favours the discovery of NEOs of a given size that get closer to Earth. Earth-based astronomy requires dark skies and hence nighttime observations, and even space-based telescopes avoid looking into directions close to the Sun, thus most NEO surveys are blind towards objects passing Earth on the side of the Sun. This bias is further enhanced by the effect of phase: the narrower the angle of the asteroid and the Sun from the observer, the lesser part of the observed side of the asteroid will be illuminated. Another bias results from the different surface brightness or albedo of the objects, which can make a large but low-albedo object as bright as a small but high-albedo object. In addition, the reflexivity of asteroid surfaces is not uniform but increases towards the direction opposite of illumination, resulting in the phenomenon of phase darkening, which makes asteroids even brighter when the Earth is close to the axis of sunlight. An asteroid's observed albedo usually has a strong peak or opposition surge very close to the direction opposite of the Sun. Different surfaces display different levels of phase darkening, and research showed that, on top of albedo bias, this favours the discovery of silicon-rich S-type asteroids over carbon-rich C types, for example. As a result of these observational biases, in Earth-based surveys, NEOs tended to be discovered when they were in opposition, that is, opposite from the Sun when viewed from the Earth. The most practical way around many of these biases is to use thermal infrared telescopes in space that observe their thermal emissions instead of the visible light they reflect, with a sensitivity that is almost independent of the illumination. In addition, space-based telescopes in an orbit around the Sun in the shadow of the Earth can make observations as close as 45 degrees to the direction of the Sun. Further observational biases favour objects that have more frequent encounters with the Earth, which makes the detection of Atens more likely than that of Apollos; and objects that move slower when encountering the Earth, which makes the detection of NEAs with low eccentricities more likely. Such observational biases must be identified and quantified to determine NEO populations, as studies of asteroid populations then take those known observational selection biases into account to make a more accurate assessment. 
In the year 2000 and taking into account all known observational biases, it was estimated that there are approximately 900 near-Earth asteroids of at least kilometer size, or technically and more accurately, with an absolute magnitude brighter than 17.75. Near-Earth asteroids These are asteroids in a near-Earth orbit without the tail or coma of a comet. , 37,255 near-Earth asteroids (NEAs) are known, 2,465 of which are both sufficiently large and may come sufficiently close to Earth to be classified as potentially hazardous. NEAs survive in their orbits for just a few million years. They are eventually eliminated by planetary perturbations, causing ejection from the Solar System or a collision with the Sun, a planet, or other celestial body. With orbital lifetimes short compared to the age of the Solar System, new asteroids must be constantly moved into near-Earth orbits to explain the observed asteroids. The accepted origin of these asteroids is that main-belt asteroids are moved into the inner Solar System through orbital resonances with Jupiter. The interaction with Jupiter through the resonance perturbs the asteroid's orbit and it comes into the inner Solar System. The asteroid belt has gaps, known as Kirkwood gaps, where these resonances occur as the asteroids in these resonances have been moved onto other orbits. New asteroids migrate into these resonances, due to the Yarkovsky effect that provides a continuing supply of near-Earth asteroids. Compared to the entire mass of the asteroid belt, the mass loss necessary to sustain the NEA population is relatively small; totalling less than 6% over the past 3.5 billion years. The composition of near-Earth asteroids is comparable to that of asteroids from the asteroid belt, reflecting a variety of asteroid spectral types. A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter. Many asteroids have natural satellites (minor-planet moons). , 104 NEAs were known to have at least one moon, including five known to have two moons. The asteroid 3122 Florence, one of the largest PHAs with a diameter of , has two moons measuring across, which were discovered by radar imaging during the asteroid's 2017 approach to Earth. In May 2022, an algorithm known as Tracklet-less Heliocentric Orbit Recovery or THOR and developed by University of Washington researchers to discover asteroids in the solar system was announced as a success. The International Astronomical Union's Minor Planet Center confirmed a series of first candidate asteroids identified by the algorithm. Size distribution While the size of a very small fraction of these asteroids is known to better than 1%, from radar observations, from images of the asteroid surface, or from stellar occultations, the diameter of the vast majority of near-Earth asteroids has only been estimated on the basis of their brightness and a representative asteroid surface reflectivity or albedo, which is commonly assumed to be 14%. Such indirect size estimates are uncertain by over a factor of 2 for individual asteroids, since asteroid albedos can range at least as low as 5% and as high as 30%. 
This makes the volume of those asteroids uncertain by a factor of 8, and their mass by at least as much, since their assumed density also has its own uncertainty. Using this crude method, an absolute magnitude of 17.75 roughly corresponds to a diameter of one kilometer and an absolute magnitude of 22.0 to a diameter of 140 meters. Diameters of intermediate precision, better than from an assumed albedo but not nearly as precise as good direct measurements, can be obtained from the combination of reflected light and thermal infrared emission, using a thermal model of the asteroid to estimate both its diameter and its albedo. The reliability of this method, as applied by the Wide-field Infrared Survey Explorer and NEOWISE missions, has been the subject of a dispute between experts, with the 2018 publication of two independent analyses, one criticising and another giving results consistent with the WISE method. A 2023 study re-evaluated the relationship of brightness, albedo and diameter. For many objects with a diameter larger than 1 km, brightness estimates were reduced slightly. Meanwhile, based on new albedo estimates of smaller objects, the study found that a somewhat fainter absolute magnitude best corresponds to a diameter of 140 m. In 2000, NASA reduced from 1,000–2,000 to 500–1,000 its estimate of the number of existing near-Earth asteroids over one kilometer in diameter, or more exactly brighter than an absolute magnitude of 17.75. Shortly thereafter, the LINEAR survey provided an alternative estimate of . In 2011, on the basis of NEOWISE observations, the estimated number of one-kilometer NEAs was narrowed to (of which 93% had been discovered at the time), while the number of NEAs larger than 140 meters across was estimated at . The NEOWISE estimate differed from other estimates primarily in assuming a slightly lower average asteroid albedo, which produces larger estimated diameters for the same asteroid brightness. This resulted in 911 then known asteroids at least 1 km across, as opposed to the 830 then listed by CNEOS from the same inputs but assuming a slightly higher albedo. In 2017, two studies using an improved statistical method reduced the estimated number of NEAs brighter than absolute magnitude 17.75 (approximately over one kilometer in diameter) slightly to . The estimated number of near-Earth asteroids brighter than absolute magnitude of 22.0 (approximately over 140 m across) rose to , double the WISE estimate, of which about a fourth were known at the time. The number of asteroids brighter than , which corresponds to about in diameter, is estimated at —of which about 1.3 percent had been discovered by February 2016; the number of asteroids brighter than (larger than ) is estimated at million—of which about 0.003 percent had been discovered by February 2016. A September 2021 study revised the estimated number of NEAs with a diameter larger than 1 km (using both WISE data and the absolute brightness lower than 17.75 as proxy) slightly upwards to , of which 911 were discovered at the time, but reduced the estimated number of asteroids brighter than absolute magnitude of 22.0 (as proxy for a diameter of 140 m) to under 20,000, of which about half were discovered at the time.
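The proxies used above (H = 17.75 for roughly one kilometer, H = 22.0 for roughly 140 m at 14% albedo) follow from the standard conversion between absolute magnitude, albedo and diameter, D [km] ≈ (1329/√albedo) × 10^(−H/5). The sketch below applies this relation; the formula is standard minor-planet practice and is not spelled out in the article itself.

```python
import math

def diameter_km(abs_magnitude_h, albedo=0.14):
    """Standard conversion from absolute magnitude H and geometric albedo to an
    estimated diameter: D [km] = 1329 / sqrt(albedo) * 10**(-H/5).
    The 14% default albedo is the representative value assumed in the text."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude_h / 5.0)

print(f"H = 17.75 -> {diameter_km(17.75):.2f} km")      # ~1.0 km
print(f"H = 22.00 -> {diameter_km(22.0) * 1000:.0f} m") # ~140 m
# With the albedo at the extremes quoted above (5%-30%), the same H = 22.0
# corresponds to roughly 100-240 m, illustrating the factor-of-~2 uncertainty.
print(f"{diameter_km(22.0, 0.30) * 1000:.0f}-{diameter_km(22.0, 0.05) * 1000:.0f} m")
```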
The 2023 study that re-evaluated the relationship of average absolute brightness, albedo and diameter confirmed the ratios of the number of discovered and estimated total asteroids of different sizes in the 2021 study, but by changing the proxy for a diameter of 140 m to , it estimated that only about 44% of the estimated 35,000 total larger than that have been discovered by the end of 2022. , NEO catalogues still use as proxy for a diameter of 140 m. , and using diameters mostly estimated crudely from a measured absolute magnitude and an assumed albedo, 867 NEAs listed by CNEOS, including 152 PHAs, measure at least 1 km in diameter, and 11,167 known NEAs, including 2,465 PHAs, are larger than 140 m in diameter. The smallest known near-Earth asteroid is with an absolute magnitude of 34.34, corresponding to an estimated diameter of about . The largest such object is 1036 Ganymed, with an absolute magnitude of 9.18 and directly measured irregular dimensions which are equivalent to a diameter of about . Orbital classification Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q): The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, , which implies that the asteroid's semi-major axis is also less than 0.983 AU. This group includes asteroids on orbits that never get close to Earth, including the sub-group of ꞌAylóꞌchaxnims, which orbit the Sun entirely within the orbit of Venus and which include the hypothetical sub-group of Vulcanoids, which have orbits entirely within the orbit of Mercury. The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, and . (0.983 AU is Earth's perihelion distance.) The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, and . (1.017 AU is Earth's aphelion distance.) The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects so . In summary, . (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars. Some authors define Atens differently: they define it as being all the asteroids with a semi-major axis of less than 1 AU. That is, they consider the Atiras to be part of the Atens. Historically, until 1998, there were no known or suspected Atiras, so the distinction wasn't necessary. Atiras and Amors do not cross the Earth's orbit and are not immediate impact threats, but their orbits may change to become Earth-crossing orbits in the future. , 34 Atiras, 2,952 Atens, 21,132 Apollos and 13,137 Amors have been discovered and cataloged. Co-orbital asteroids Most NEAs have orbits that are significantly more eccentric than that of the Earth and the other major planets and their orbital planes can tilt several degrees relative to that of the Earth. NEAs which have orbits that do resemble the Earth's in eccentricity, inclination and semi-major axis are grouped as Arjuna asteroids. Within this group are NEAs that have the same orbital period as the Earth, or a co-orbital configuration, which corresponds to an orbital resonance at a ratio of 1:1. 
All co-orbital asteroids have special orbits that are relatively stable and, paradoxically, can prevent them from getting close to Earth: Trojans: Near the orbit of a planet, there are five gravitational equilibrium points, the Lagrangian points, in which an asteroid would orbit the Sun in fixed formation with the planet. Two of these, 60 degrees ahead of and behind the planet along its orbit (designated L4 and L5 respectively), are stable; that is, an asteroid near these points would stay there for thousands or even millions of years in spite of slight perturbations by other planets and by non-gravitational forces. Trojans circle around L4 or L5 on paths resembling a tadpole. , Earth has two confirmed Trojans: and , both circling Earth's L4 point. Horseshoe librators: The region of stability around L4 and L5 also includes orbits for co-orbital asteroids that run around both L4 and L5. Relative to the Earth and Sun, the orbit can resemble the circumference of a horseshoe, or it may consist of annual loops that wander back and forth (librate) in a horseshoe-shaped area. In both cases, the Sun is at the horseshoe's center of gravity, Earth is in the gap of the horseshoe, and L4 and L5 are inside the ends of the horseshoe. Among Earth's known co-orbitals, those with the most stable orbits as well as those with the least stable orbits are horseshoe librators. , at least 13 horseshoe librators of Earth have been discovered. The most-studied and, at about , largest is 3753 Cruithne, which travels along bean-shaped annual loops and completes its horseshoe libration cycle every 770–780 years. is an asteroid on a relatively stable circumference-of-a-horseshoe orbit, with a horseshoe libration period of about 350 years. Quasi-satellites: Quasi-satellites are co-orbital asteroids on a normal elliptical orbit around the Sun with a higher eccentricity than Earth's, which they travel in synchrony with Earth's motion. Since the asteroid orbits the Sun more slowly than Earth when farther away and faster than Earth when closer to the Sun, when observed in a rotating frame of reference fixed to the Sun and the Earth, the quasi-satellite appears to orbit Earth in a retrograde direction in one year, even though it is not gravitationally bound to Earth. , six asteroids were known to be quasi-satellites of Earth. 469219 Kamoʻoalewa is Earth's closest quasi-satellite, in an orbit that has been stable for almost a century. This asteroid is thought to be a piece of the Moon ejected during an impact. Orbit calculations show that almost all quasi-satellites and many horseshoe librators repeatedly transfer between horseshoe and quasi-satellite orbits. One of these objects, , was observed during its transition from a quasi-satellite orbit to a horseshoe orbit in 2006; it is expected to transfer back to a quasi-satellite orbit sometime around the year 2066. A quasi-satellite discovered in 2023 but then found in old photographs going back to 2012, , was found to have an orbit that is stable for about 4,000 years, from 100 BC to AD 3700. Asteroids on compound orbits: Orbital calculations show that some co-orbital asteroids transit between horseshoe and quasi-satellite orbits during each horseshoe or quasi-satellite cycle. Theoretically, similar continuous transitions between Trojan and horseshoe orbits are possible, too. , at least 20 Earth co-orbital NEAs are thought to be in the horseshoe-like phase of compound orbits.
Temporary satellites: NEAs can also transfer between solar orbits and distant Earth orbits, becoming gravitationally bound temporary satellites. According to simulations, temporary satellites are typically caught when they pass Earth's L1 or L2 Lagrangian points at the time Earth is either at the point in its orbit closest or farthest from the Sun, complete a couple of orbits around Earth, and then return to a heliocentric orbit due to perturbations from the Moon. Strictly speaking, temporary satellites aren't co-orbital asteroids, and they can have orbits of the broader Arjuna type before and after capture by Earth, but simulations show that they can be captured from, or transfer to, horseshoe orbits. The simulations also indicate that Earth typically has at least one temporary satellite across at any given time, but they are too faint to be detected by current surveys. , five temporary satellites have been observed: , , , and . Calculations for the asteroid showed repeated transitions into temporary satellite orbits both in the past and the future 10,000 years. Near-Earth asteroids also include the co-orbitals of Venus. , all known co-orbitals of Venus have orbits with high eccentricity, also crossing Earth's orbit. Meteoroids In 1961, the IAU defined meteoroids as a class of solid interplanetary objects distinct from asteroids by their considerably smaller size. This definition was useful at the time because, with the exception of the Tunguska event, all historically observed meteors were produced by objects significantly smaller than the smallest asteroids then observable by telescopes. As the distinction began to blur with the discovery of ever smaller asteroids and a greater variety of observed NEO impacts, revised definitions with size limits have been proposed from the 1990s. In April 2017, the IAU adopted a revised definition that generally limits meteoroids to a size between 30 μm and 1 m in diameter, but permits the use of the term for any object of any size that caused a meteor, thus leaving the distinction between asteroid and meteoroid blurred. Near-Earth comets Near-Earth comets (NECs) are objects in a near-Earth orbit with a tail or coma made up of dust, gas or ionized particles emitted by a solid nucleus. Comet nuclei are typically less dense than asteroids but they pass Earth at higher relative speeds, thus the impact energy of a comet nucleus is slightly larger than that of a similar-sized asteroid. NECs may pose an additional hazard due to fragmentation: the meteoroid streams which produce meteor showers may include large inactive fragments, effectively NEAs. Although no impact of a comet in Earth's history has been conclusively confirmed, the Tunguska event may have been caused by a fragment of Comet Encke. Comets are commonly divided between short-period and long-period comets. Short-period comets, with an orbital period of less than 200 years, originate in the Kuiper belt, beyond the orbit of Neptune; while long-period comets originate in the Oort Cloud, in the outer reaches of the Solar System. The orbital period distinction is of importance in the evaluation of the risk from near-Earth comets because short-period NECs are likely to have been observed during multiple apparitions and thus their orbits can be determined with some precision, while long-period NECs can be assumed to have been seen for the first and last time when they appeared since the start of precise observations, thus their approaches cannot be predicted well in advance. 
Since the threat from long-period NECs is estimated to be at most 1% of the threat from NEAs, and long-period comets are very faint and thus difficult to detect at large distances from the Sun, Spaceguard efforts have consistently focused on asteroids and short-period comets. Both NASA's CNEOS and ESA's NEOCC restrict their definition of NECs to short-period comets. , 123 such objects have been discovered. Comet 109P/Swift–Tuttle, which is also the source of the Perseid meteor shower every year in August, has a roughly 130-year orbit that passes close to the Earth. During the comet's September 1992 recovery, when only the two previous returns in 1862 and 1737 had been identified, calculations showed that the comet would pass close to Earth during its next return in 2126, with an impact within the range of uncertainty. By 1993, even earlier returns (back to at least 188 AD) had been identified, and the longer observation arc eliminated the impact risk. The comet will pass Earth in 2126 at a distance of 23 million kilometers. In 3044, the comet is expected to pass Earth at less than 1.6 million kilometers. Artificial near-Earth objects Defunct space probes and final stages of rockets can end up in near-Earth orbits around the Sun. Examples of such artificial near-Earth objects include a Tesla Roadster used as dummy payload in a 2018 rocket test and the Kepler space telescope. Some of these objects have been re-discovered by NEO surveys when they returned to Earth's vicinity and classified as asteroids before their artificial origin was recognised. An object classified as asteroid 1991 VG was discovered during its transition from a temporary satellite orbit around Earth to a solar orbit in November 1991, and could only be observed until April 1992. Some scientists suspected it to be a returning piece of man-made space debris. After new observations in 2017 provided better data on its orbit and surface characteristics, a new study found the artificial origin unlikely. In September 2002, astronomers found an object designated J002E3. The object was on a temporary satellite orbit around Earth, leaving for a solar orbit in June 2003. Calculations showed that it was also on a solar orbit before 2002, but was close to Earth in 1971. J002E3 was identified as the third stage of the Saturn V rocket that carried Apollo 12 to the Moon. In 2006, two more apparent temporary satellites were discovered which were suspected of being artificial. One of them was eventually confirmed as an asteroid and classified as the temporary satellite . The other, 6Q0B44E, was confirmed as an artificial object, but its identity is unknown. Another temporary satellite was discovered in 2013, and was designated as a suspected asteroid. It was later found to be an artificial object of unknown origin. is no longer listed as an asteroid by the Minor Planet Center. In September 2020, an object detected on an orbit very similar to that of the Earth was temporarily designated . However, orbital calculations and spectral observations confirmed that the object was the Centaur rocket booster of the 1966 Surveyor 2 uncrewed lunar lander. In some cases, active space probes on solar orbits have been observed by NEO surveys and erroneously catalogued as asteroids before identification. During its 2007 flyby of Earth on its route to a comet, ESA's space probe Rosetta was detected unidentified and classified as asteroid , with an alert issued due to its close approach. 
The designation was similarly removed from asteroid catalogues when the observed object was identified with Gaia, ESA's space observatory for astrometry. Exploratory missions Some NEOs are of special interest because the sum total of changes in orbital speed required to send a spacecraft on a mission to physically explore an NEO – and thus the amount of rocket fuel required for the mission – is lower than what is necessary for even lunar missions, due to their combination of low velocity with respect to Earth and weak gravity. They may present interesting scientific opportunities both for direct geochemical and astronomical investigation, and as potentially economical sources of extraterrestrial materials for human exploitation. This makes them an attractive target for exploration. Missions to NEAs The IAU held a minor planets workshop in Tucson, Arizona, in March 1971. At that point, launching a spacecraft to asteroids was considered premature; the workshop only inspired the first astronomical survey specifically aiming for NEAs. Missions to asteroids were considered again during a workshop at the University of Chicago held by NASA's Office of Space Science in January 1978. Of all of the near-Earth asteroids (NEA) that had been discovered by mid-1977, it was estimated that spacecraft could rendezvous with and return from only about 1 in 10 using less propulsive energy than is necessary to reach Mars. It was recognised that due to the low surface gravity of all NEAs, moving around on the surface of an NEA would cost very little energy, and thus space probes could gather multiple samples. Overall, it was estimated that about one percent of all NEAs might provide opportunities for human-crewed missions, or no more than about ten NEAs known at the time. A five-fold increase in the NEA discovery rate was deemed necessary to make a crewed mission within ten years worthwhile. The first near-Earth asteroid to be visited by a spacecraft was 433 Eros when NASA's NEAR Shoemaker probe orbited it from February 2000, landing on the surface of the asteroid in February 2001. A second NEA, the long peanut-shaped 25143 Itokawa, was explored from September 2005 to April 2007 by JAXA's Hayabusa mission, which succeeded in taking material samples back to Earth. A third NEA, the long elongated 4179 Toutatis, was explored by CNSA's Chang'e 2 spacecraft during a flyby in December 2012. The Apollo asteroid 162173 Ryugu was explored from June 2018 until November 2019 by JAXA's Hayabusa2 space probe, which returned a sample to Earth. A second sample-return mission, NASA's OSIRIS-REx probe, targeted the Apollo asteroid 101955 Bennu, which, , has the second-highest cumulative Palermo scale rating (−1.40 for several close encounters between 2178 and 2290). On its journey to Bennu, the probe had searched unsuccessfully for Earth's Trojan asteroids, entered into orbit around Bennu in December 2018, touched down on its surface in October 2020, and was successful in returning samples to Earth three years later. China plans to launch its own sample-return mission, Tianwen-2, in May 2025, targeting Earth quasi-satellite and returning samples to Earth in late 2027. After completing its mission to Bennu, the probe OSIRIS-REx was redirected towards 99942 Apophis, which it is planned to orbit from April 2029. 
After completing its exploration of 162173 Ryugu, the mission of the Hayabusa2 space probe was extended, to include flybys of S-type Apollo asteroid 98943 Torifune in July 2026 and fast-rotating Apollo asteroid in July 2031. In 2025, JAXA plans to launch another probe, DESTINY+, to explore Apollo asteroid , the parent body of the Geminid meteor shower, during a flyby. Asteroid deflection tests On September 26, 2022, NASA's DART spacecraft reached the system of and impacted the Apollo asteroid's moon Dimorphos, in a test of a method of planetary defense against near-Earth objects. In addition to telescopes on or in orbit around the Earth, the impact was observed by the Italian mini-spacecraft or CubeSat LICIACube, which separated from DART 15 days before impact. The impact shortened the orbital period of Dimorphos around Didymos by 33 minutes, indicating that the moon's momentum change was 3.6 times the momentum of the impacting spacecraft, thus most of the change was due to the ejected material of the moon itself. In October 2024, ESA launched the spacecraft Hera, which is to enter orbit around Didymos in December 2026, to study the consequences of the DART impact. China plans to launch its own pair of asteroid deflection and observation probes in 2027, which are to target Aten asteroid . Space mining From the 2000s, there were plans for the commercial exploitation of near-Earth asteroids, either through the use of robots or even by sending private commercial astronauts to act as space miners, but few of these plans were pursued. In April 2012, the company Planetary Resources announced its plans to mine asteroids commercially. In a first phase, the company reviewed data and selected potential targets among NEAs. In a second phase, space probes would be sent to the selected NEAs; mining spacecraft would be sent in a third phase. Planetary Resources launched two testbed satellites in April 2015 and January 2018, and the first prospecting satellite for the second phase was planned for a 2020 launch prior to the company closing and its assets purchased by ConsenSys Space in 2018. Another American company established with the goal of space mining, AstroForge, plans to launch the probe Odin (formerly Brokkr-2) at the end of February 2025, with the goal of performing a flyby of an as yet undisclosed asteroid to confirm if it is a metal-rich M-type asteroid, and then follow it up later in 2025 with the probe Vestri, which is to land on the same asteroid. Missions to NECs The first near-Earth comet visited by a space probe was 21P/Giacobini–Zinner in 1985, when the NASA/ESA probe International Cometary Explorer (ICE) passed through its coma. In March 1986, ICE, along with Soviet probes Vega 1 and Vega 2, ISAS probes Sakigake and Suisei and ESA probe Giotto flew by the nucleus of Halley's Comet. In 1992, Giotto also visited another NEC, 26P/Grigg–Skjellerup. In November 2010, after completing its primary mission to non-near-Earth comet Tempel 1, the NASA probe Deep Impact flew by the near-Earth comet 103P/Hartley. In August 2014, ESA probe Rosetta began orbiting near-Earth comet 67P/Churyumov–Gerasimenko, while its lander Philae landed on its surface in November 2014. After the end of its mission, Rosetta was crashed into the comet's surface in 2016.
Physical sciences
Solar System
Astronomy
21652
https://en.wikipedia.org/wiki/Natural%20language%20processing
Natural language processing
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Typically data is collected in text corpora, using either rule-based, statistical or neural-based approaches in machine learning and deep learning. Major tasks in natural language processing are speech recognition, text classification, natural-language understanding, and natural-language generation. History Natural language processing has its roots in the 1950s. Already in 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language. Symbolic NLP (1950s – early 1990s) The premise of symbolic NLP is well-summarized by John Searle's Chinese room experiment: Given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts. 1950s: The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted in America (though some research continued elsewhere, such as Japan and Europe) until the late 1980s when the first statistical machine translation systems were developed. 1960s: Some notably successful natural language processing systems developed in the 1960s were SHRDLU, a natural language system working in restricted "blocks worlds" with restricted vocabularies, and ELIZA, a simulation of a Rogerian psychotherapist, written by Joseph Weizenbaum between 1964 and 1966. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. When the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in a computer memory at the time. 1970s: During the 1970s, many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE (Schank, 1975), SAM (Cullingford, 1978), PAM (Wilensky, 1978), TaleSpin (Meehan, 1976), QUALM (Lehnert, 1977), Politics (Carbonell, 1979), and Plot Units (Lehnert 1981). During this time, the first chatterbots were written (e.g., PARRY). 1980s: The 1980s and early 1990s mark the heyday of symbolic methods in NLP. 
Focus areas of the time included research on rule-based parsing (e.g., the development of HPSG as a computational operationalization of generative grammar), morphology (e.g., two-level morphology), semantics (e.g., Lesk algorithm), reference (e.g., within Centering Theory) and other areas of natural language understanding (e.g., in the Rhetorical Structure Theory). Other lines of research were continued, e.g., the development of chatterbots with Racter and Jabberwacky. An important development (that eventually led to the statistical turn in the 1990s) was the rising importance of quantitative evaluation in this period. Statistical NLP (1990s–2010s) Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. This was due to both the steady increase in computational power (see Moore's law) and the gradual lessening of the dominance of Chomskyan theories of linguistics (e.g. transformational grammar), whose theoretical underpinnings discouraged the sort of corpus linguistics that underlies the machine-learning approach to language processing. 1990s: Many of the notable early successes in statistical methods in NLP occurred in the field of machine translation, due especially to work at IBM Research, such as IBM alignment models. These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems, which was (and often continues to be) a major limitation in the success of these systems. As a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. 2000s: With the growth of the web, increasing amounts of raw (unannotated) language data have become available since the mid-1990s. Research has thus increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms can learn from data that has not been hand-annotated with the desired answers or using a combination of annotated and non-annotated data. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results if the algorithm used has a low enough time complexity to be practical. Neural NLP (present) In 2003, word n-gram model, at the time the best statistical algorithm, was outperformed by a multi-layer perceptron (with a single hidden layer and context length of several words trained on up to 14 million of words with a CPU cluster in language modelling) by Yoshua Bengio with co-authors. In 2010, Tomáš Mikolov (then a PhD student at Brno University of Technology) with co-authors applied a simple recurrent neural network with a single hidden layer to language modelling, and in the following years he went on to develop Word2vec. 
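For context on what the 2003 neural result displaced, a count-based word n-gram model can be sketched in a few lines. A toy bigram model with add-one smoothing (the corpus, vocabulary and smoothing choice are purely illustrative):

```python
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))

# Count bigrams and the contexts (previous words) they condition on.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (contexts[prev] + len(vocab))

print(bigram_prob("the", "cat"))   # relatively high: "the cat" occurs in the corpus
print(bigram_prob("the", "sat"))   # lower: "sat" never follows "the" here
```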
In the 2010s, representation learning and deep neural network-style (featuring many hidden layers) machine learning methods became widespread in natural language processing. That popularity was due partly to a flurry of results showing that such techniques can achieve state-of-the-art results in many natural language tasks, e.g., in language modeling and parsing. This is increasingly important in medicine and healthcare, where NLP helps analyze notes and text in electronic health records that would otherwise be inaccessible for study when seeking to improve care or protect patient privacy. Approaches: Symbolic, statistical, neural networks Symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols, coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular: such as by writing grammars or devising heuristic rules for stemming. Machine learning approaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach: both statistical and neural networks methods can focus more on the most common cases extracted from a corpus of texts, whereas the rule-based approach needs to provide rules for both rare cases and common ones equally. language models, produced by either statistical or neural networks methods, are more robust to both unfamiliar (e.g. containing words or structures that have not been seen before) and erroneous input (e.g. with misspelled words or words accidentally omitted) in comparison to the rule-based systems, which are also more costly to produce. the larger such a (probabilistic) language model is, the more accurate it becomes, in contrast to rule-based systems that can gain accuracy only by increasing the amount and complexity of the rules leading to intractability problems. Although rule-based systems for manipulating symbols were still in use in 2020, they have become mostly obsolete with the advance of LLMs in 2023. Before that they were commonly used: when the amount of training data is insufficient to successfully apply machine learning methods, e.g., for the machine translation of low-resource languages such as provided by the Apertium system, for preprocessing in NLP pipelines, e.g., tokenization, or for postprocessing and transforming the output of NLP pipelines, e.g., for knowledge extraction from syntactic parses. Statistical approach In the late 1980s and mid-1990s, the statistical approach ended a period of AI winter, which was caused by the inefficiencies of the rule-based approaches. The earliest decision trees, producing systems of hard if–then rules, were still very similar to the old rule-based approaches. Only the introduction of hidden Markov models, applied to part-of-speech tagging, announced the end of the old rule-based approach. Neural networks A major drawback of statistical methods is that they require elaborate feature engineering. Since 2015, the statistical approach has been replaced by the neural networks approach, using semantic networks and word embeddings to capture semantic properties of words. Intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are not needed anymore. Neural machine translation, based on then-newly invented sequence-to-sequence transformations, made obsolete the intermediate steps, such as word alignment, previously necessary for statistical machine translation. 
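The word embeddings mentioned above rest on the distributional idea that words occurring in similar contexts should receive similar vectors. A toy sketch using raw co-occurrence counts and cosine similarity (real embeddings such as Word2vec are learned by neural networks from far larger corpora; the four-sentence corpus is illustrative):

```python
import numpy as np

sentences = [
    "the cat drinks milk", "the dog drinks water",
    "the cat chases the mouse", "the dog chases the ball",
]
tokens = [s.split() for s in sentences]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if i != j:
                counts[index[w], index[sent[j]]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" and "dog" occur in near-identical contexts, so their vectors end up similar.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
print(cosine(counts[index["cat"]], counts[index["milk"]]))
```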
Common NLP tasks The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks. Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below. Text and speech processing Optical character recognition (OCR) Given an image representing printed text, determine the corresponding text. Speech recognition Given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text to speech and is one of the extremely difficult problems colloquially termed "AI-complete" (see above). In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition (see below). In most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process. Also, given that words in the same language are spoken by people with different accents, the speech recognition software must be able to recognize the wide variety of input as being identical to each other in terms of its textual equivalent. Speech segmentation Given a sound clip of a person or people speaking, separate it into words. A subtask of speech recognition and typically grouped with it. Text-to-speech Given a text, transform those units and produce a spoken representation. Text-to-speech can be used to aid the visually impaired. Word segmentation (Tokenization) Tokenization is a process used in text analysis that divides text into individual words or word fragments. This technique results in two key components: a word index and tokenized text. The word index is a list that maps unique words to specific numerical identifiers, and the tokenized text replaces each word with its corresponding numerical token. These numerical tokens are then used in various deep learning methods. For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language. Sometimes this process is also used in cases like bag of words (BOW) creation in data mining. Morphological analysis Lemmatization The task of removing inflectional endings only and to return the base dictionary form of a word which is also known as a lemma. Lemmatization is another technique for reducing words to their normalized form. But in this case, the transformation actually uses a dictionary to map words to their actual form. Morphological segmentation Separate words into individual morphemes and identify the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e., the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g., "open, opens, opened, opening") as separate words. 
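A minimal sketch of the word index and tokenized text described above, using simple whitespace tokenization (sufficient for English, but, as noted, not for languages such as Chinese or Thai; the helper names are illustrative):

```python
def build_word_index(texts):
    """Map each unique word to a numerical identifier; 0 is reserved for unknown words."""
    index = {"<unk>": 0}
    for text in texts:
        for word in text.lower().split():
            index.setdefault(word, len(index))
    return index

def tokenize(text, index):
    """Replace each word with its corresponding numerical token."""
    return [index.get(word, 0) for word in text.lower().split()]

corpus = ["The quick brown fox", "The lazy dog"]
word_index = build_word_index(corpus)
print(word_index)                              # {'<unk>': 0, 'the': 1, 'quick': 2, ...}
print(tokenize("The quick dog", word_index))   # [1, 2, 6]
```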
In languages such as Turkish or Meitei, a highly agglutinated Indian language, however, such an approach is not possible, as each dictionary entry has thousands of possible word forms. Part-of-speech tagging Given a sentence, determine the part of speech (POS) for each word. Many words, especially common ones, can serve as multiple parts of speech. For example, "book" can be a noun ("the book on the table") or verb ("to book a flight"); "set" can be a noun, verb or adjective; and "out" can be any of at least five different parts of speech. Stemming The process of reducing inflected (or sometimes derived) words to a base form (e.g., "close" will be the root for "closed", "closing", "close", "closer" etc.). Stemming yields similar results as lemmatization, but does so on grounds of rules, not a dictionary. Syntactic analysis Grammar induction Generate a formal grammar that describes a language's syntax. Sentence breaking (also known as "sentence boundary disambiguation") Given a chunk of text, find the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes (e.g., marking abbreviations). Parsing Determine the parse tree (grammatical analysis) of a given sentence. The grammar for natural languages is ambiguous and typical sentences have multiple possible analyses: perhaps surprisingly, for a typical sentence there may be thousands of potential parses (most of which will seem completely nonsensical to a human). There are two primary types of parsing: dependency parsing and constituency parsing. Dependency parsing focuses on the relationships between words in a sentence (marking things like primary objects and predicates), whereas constituency parsing focuses on building out the parse tree using a probabilistic context-free grammar (PCFG) (see also stochastic grammar). Lexical semantics (of individual words in context) Lexical semantics What is the computational meaning of individual words in context? Distributional semantics How can we learn semantic representations from data? Named entity recognition (NER) Given a stream of text, determine which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization). Although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case, is often inaccurate or insufficient. For example, the first letter of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized. Furthermore, many other languages in non-Western scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with capitalization may not consistently use it to distinguish names. For example, German capitalizes all nouns, regardless of whether they are names, and French and Spanish do not capitalize names that serve as adjectives. Another name for this task is token classification. Sentiment analysis (see also Multimodal sentiment analysis) Sentiment analysis is a computational method used to identify and classify the emotional intent behind text. This technique involves analyzing text to determine whether the expressed sentiment is positive, negative, or neutral. 
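The contrast drawn above, suffix-stripping rules for stemming versus a dictionary lookup for lemmatization, can be made concrete with a deliberately tiny sketch (the suffix list and dictionary are illustrative stand-ins, not a real stemmer such as Porter's or a real lexicon such as WordNet):

```python
# Rule-based stemming: strip common suffixes, with no knowledge of real words.
SUFFIXES = ("ing", "ed", "er", "s")

def stem(word: str) -> str:
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# Dictionary-based lemmatization: map inflected forms to their dictionary lemma.
LEMMA_DICT = {"closed": "close", "closing": "close", "closer": "close",
              "better": "good", "went": "go"}

def lemmatize(word: str) -> str:
    return LEMMA_DICT.get(word, word)

for w in ("closing", "closer", "better", "went"):
    print(f"{w:8s} stem: {stem(w):6s} lemma: {lemmatize(w)}")
```

The stemmer happily produces non-words such as "clos" and "bett", while the lemmatizer returns dictionary forms but only for words it actually knows.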
Models for sentiment classification typically utilize inputs such as word n-grams, Term Frequency-Inverse Document Frequency (TF-IDF) features, hand-generated features, or employ deep learning models designed to recognize both long-term and short-term dependencies in text sequences. The applications of sentiment analysis are diverse, extending to tasks such as categorizing customer reviews on various online platforms. Terminology extraction The goal of terminology extraction is to automatically extract relevant terms from a given corpus. Word-sense disambiguation (WSD) Many words have more than one meaning; we have to select the meaning which makes the most sense in context. For this problem, we are typically given a list of words and associated word senses, e.g. from a dictionary or an online resource such as WordNet. Entity linking Many words—typically proper names—refer to named entities; here we have to select the entity (a famous individual, a location, a company, etc.) which is referred to in context. Relational semantics (semantics of individual sentences) Relationship extraction Given a chunk of text, identify the relationships among named entities (e.g. who is married to whom). Semantic parsing Given a piece of text (typically a sentence), produce a formal representation of its semantics, either as a graph (e.g., in AMR parsing) or in accordance with a logical formalism (e.g., in DRT parsing). This challenge typically includes aspects of several more elementary NLP tasks from semantics (e.g., semantic role labelling, word-sense disambiguation) and can be extended to include full-fledged discourse analysis (e.g., discourse analysis, coreference; see Natural language understanding below). Semantic role labelling (see also implicit semantic role labelling below) Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames), then identify and classify the frame elements (semantic roles). Discourse (semantics beyond individual sentences) Coreference resolution Given a sentence or larger chunk of text, determine which words ("mentions") refer to the same objects ("entities"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called "bridging relationships" involving referring expressions. For example, in a sentence such as "He entered John's house through the front door", "the front door" is a referring expression and the bridging relationship to be identified is the fact that the door being referred to is the front door of John's house (rather than of some other structure that might also be referred to). Discourse analysis This rubric includes several related tasks. One task is discourse parsing, i.e., identifying the discourse structure of a connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text (e.g. yes–no question, content question, statement, assertion, etc.). Given a single sentence, identify and disambiguate semantic predicates (e.g., verbal frames) and their explicit semantic roles in the current sentence (see Semantic role labelling above). 
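A minimal sketch of the TF-IDF feature approach mentioned above, here paired with logistic regression via scikit-learn (the four labelled reviews are illustrative; a practical system would be trained on far more data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great product, works perfectly",
           "absolutely terrible, waste of money",
           "I love it, highly recommended",
           "broke after two days, very disappointed"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF turns each review into a weighted bag-of-words/bigram vector;
# logistic regression learns which weighted terms signal each class.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["works great, highly recommended"]))   # likely 'positive'
print(model.predict(["total waste, very disappointed"]))    # likely 'negative'
```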
Then, identify semantic roles that are not explicitly realized in the current sentence, classify them into arguments that are explicitly realized elsewhere in the text and those that are not specified, and resolve the former against the local text. A closely related task is zero anaphora resolution, i.e., the extension of coreference resolution to pro-drop languages. Recognizing textual entailment Given two text fragments, determine if one being true entails the other, entails the other's negation, or allows the other to be either true or false. Topic segmentation and recognition Given a chunk of text, separate it into segments each of which is devoted to a topic, and identify the topic of the segment. Argument mining The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs. Such argumentative structures include the premise, conclusions, the argument scheme and the relationship between the main and subsidiary argument, or the main and counter-argument within discourse. Higher-level NLP applications Automatic summarization (text summarization) Produce a readable summary of a chunk of text. Often used to provide summaries of the text of a known type, such as research papers, articles in the financial section of a newspaper. Grammatical error detection and correction involves a great band-width of problems on all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). Grammatical error correction is impactful since it affects hundreds of millions of people that use or acquire English as a second language. It has thus been subject to a number of shared tasks since 2011. As far as orthography, morphology, syntax and certain aspects of semantics are concerned, and due to the development of powerful neural language models such as GPT-2, this can now (2019) be considered a largely solved problem and is being marketed in various commercial applications. Logic translation Translate a text from a natural language into formal logic. Machine translation (MT) Automatically translate text from one human language to another. This is one of the most difficult problems, and is a member of a class of problems colloquially termed "AI-complete", i.e. requiring all of the different types of knowledge that humans possess (grammar, semantics, facts about the real world, etc.) to solve properly. Natural-language understanding (NLU) Convert chunks of text into more formal representations such as first-order logic structures that are easier for computer programs to manipulate. Natural language understanding involves the identification of the intended semantic from the multiple possible semantics which can be derived from a natural language expression which usually takes the form of organized notations of natural language concepts. Introduction and creation of language metamodel and ontology are efficient however empirical solutions. An explicit formalization of natural language semantics without confusions with implicit assumptions such as closed-world assumption (CWA) vs. open-world assumption, or subjective Yes/No vs. objective True/False is expected for the construction of a basis of semantics formalization. Natural-language generation (NLG): Convert information from computer databases or semantic intents into readable human language. 
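Among the applications above, automatic summarization has a classical extractive baseline: score each sentence by the frequency of its content words and keep the top-scoring ones. A deliberately simplistic sketch (the stop-word list and scoring are minimal placeholders):

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "it", "that", "on"}

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(sentence: str) -> int:
        # A sentence scores the summed corpus frequency of its content words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOP)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)

doc = ("Natural language processing lets computers analyze human language. "
       "Modern language processing systems rely on machine learning. "
       "Machine learning models are trained on large text corpora. "
       "The office coffee machine broke again on Friday.")
print(summarize(doc, 2))   # the off-topic final sentence scores lowest and is dropped
```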
Book generation Not an NLP task proper but an extension of natural language generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). The first published work by a neural network was published in 2018, 1 the Road, marketed as a novel, contains sixty million words. Both these systems are basically elaborate but non-sensical (semantics-free) language models. The first machine-generated science book was published in 2019 (Beta Writer, Lithium-Ion Batteries, Springer, Cham). Unlike Racter and 1 the Road, this is grounded on factual knowledge and based on text summarization. Document AI A Document AI platform sits on top of the NLP technology enabling users with no prior experience of artificial intelligence, machine learning or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams to quickly access information hidden in documents, for example, lawyers, business analysts and accountants. Dialogue management Computer systems intended to converse with a human. Question answering Given a human-language question, determine its answer. Typical questions have a specific right answer (such as "What is the capital of Canada?"), but sometimes open-ended questions are also considered (such as "What is the meaning of life?"). Text-to-image generation Given a description of an image, generate an image that matches the description. Text-to-scene generation Given a description of a scene, generate a 3D model of the scene. Text-to-video Given a description of a video, generate a video that matches the description. General tendencies and (possible) future directions Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed: Interest on increasingly abstract, "cognitive" aspects of natural language (1999–2001: shallow parsing, 2002–03: named entity recognition, 2006–09/2017–18: dependency syntax, 2004–05/2008–09 semantic role labelling, 2011–12 coreference, 2015–16: discourse parsing, 2019: semantic parsing). Increasing interest in multilinguality, and, potentially, multimodality (English since 1999; Spanish, Dutch since 2002; German since 2003; Bulgarian, Danish, Japanese, Portuguese, Slovenian, Swedish, Turkish since 2006; Basque, Catalan, Chinese, Greek, Hungarian, Italian, Turkish since 2007; Czech since 2009; Arabic since 2012; 2017: 40+ languages; 2018: 60+/100+ languages) Elimination of symbolic representations (rule-based over supervised towards weakly supervised methods, representation learning and end-to-end systems) Cognition Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above). Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses." Cognitive science is the interdisciplinary, scientific study of the mind and its processes. 
Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics. Especially during the age of symbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies. As an example, George Lakoff offers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics, with two defining aspects: Apply the theory of conceptual metaphor, explained by Lakoff as "the understanding of one idea, in terms of another", which provides an idea of the intent of the author. For example, consider the English word big. When used in a comparison ("That is a big tree"), the author's intent is to imply that the tree is physically large relative to other trees or to the author's experience. When used metaphorically ("Tomorrow is a big day"), the author's intent is to imply importance. The intent behind other usages, like in "She is a big person", will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information. Assign relative measures of meaning to a word, phrase, sentence or piece of text based on the information presented before and after the piece of text being analyzed, e.g., by means of a probabilistic context-free grammar (PCFG). The mathematical equation for such algorithms is presented in US Patent 9269353, in which: RMM is the relative measure of meaning; a token is any block of text, sentence, phrase or word; N is the number of tokens being analyzed; PMM is the probable measure of meaning based on a corpus; d is the non-zero location of the token along the sequence of N tokens; and PF is the probability function specific to a language. Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn during the 1990s. Nevertheless, approaches to develop cognitive models towards technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., of cognitive grammar, functional grammar, construction grammar, computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), however, with limited uptake in mainstream NLP (as measured by presence on major conferences of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e.g., under the notion of "cognitive AI". Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit) and to developments in artificial intelligence, specifically tools and technologies using large language model approaches and new directions in artificial general intelligence based on the free energy principle by British neuroscientist and theoretician at University College London Karl J. Friston.
Technology
Artificial intelligence concepts
null
21655
https://en.wikipedia.org/wiki/Nitric%20acid
Nitric acid
Nitric acid is an inorganic compound with the formula . It is a highly corrosive mineral acid. The compound is colorless, but samples tend to acquire a yellow cast over time due to decomposition into oxides of nitrogen. Most commercially available nitric acid has a concentration of 68% in water. When the solution contains more than 86% , it is referred to as fuming nitric acid. Depending on the amount of nitrogen dioxide present, fuming nitric acid is further characterized as red fuming nitric acid at concentrations above 86%, or white fuming nitric acid at concentrations above 95%. Nitric acid is the primary reagent used for nitration – the addition of a nitro group, typically to an organic molecule. While some resulting nitro compounds are shock- and thermally-sensitive explosives, a few are stable enough to be used in munitions and demolition, while others are still more stable and used as synthetic dyes and medicines (e.g. metronidazole). Nitric acid is also commonly used as a strong oxidizing agent. History Medieval alchemy The discovery of mineral acids such as nitric acid is generally believed to go back to 13th-century European alchemy. The conventional view is that nitric acid was first described in pseudo-Geber's De inventione veritatis ("On the Discovery of Truth", after ). However, according to Eric John Holmyard and Ahmad Y. al-Hassan, the nitric acid also occurs in various earlier Arabic works such as the ("Chest of Wisdom") attributed to Jabir ibn Hayyan (8th century) or the attributed to the Fatimid caliph al-Hakim bi-Amr Allah (985–1021). The recipe in the attributed to Jabir has been translated as follows:Take five parts of pure flowers of nitre, three parts of Cyprus vitriol and two parts of Yemen alum. Powder them well, separately, until they are like dust and then place them in a flask. Plug the latter with a palm fibre and attach a glass receiver to it. Then invert the apparatus and heat the upper portion (i.e. the flask containing the mixture) with a gentle fire. There will flow down by reason of the heat an oil like cow's butter. Nitric acid is also found in post-1300 works falsely attributed to Albert the Great and Ramon Llull (both 13th century). These works describe the distillation of a mixture containing niter and green vitriol, which they call "eau forte" (aqua fortis). Modern era In the 17th century, Johann Rudolf Glauber devised a process to obtain nitric acid by distilling potassium nitrate with sulfuric acid. In 1776 Antoine Lavoisier cited Joseph Priestley's work to point out that it can be converted from nitric oxide (which he calls "nitrous air"), "combined with an approximately equal volume of the purest part of common air, and with a considerable quantity of water." In 1785 Henry Cavendish determined its precise composition and showed that it could be synthesized by passing a stream of electric sparks through moist air. In 1806, Humphry Davy reported the results of extensive distilled water electrolysis experiments concluding that nitric acid was produced at the anode from dissolved atmospheric nitrogen gas. He used a high voltage battery and non-reactive electrodes and vessels such as gold electrode cones that doubled as vessels bridged by damp asbestos. The industrial production of nitric acid from atmospheric air began in 1905 with the Birkeland–Eyde process, also known as the arc process. This process is based upon the oxidation of atmospheric nitrogen by atmospheric oxygen to nitric oxide with a very high temperature electric arc. 
Yields of up to approximately 4–5% nitric oxide were obtained at 3000 °C, and less at lower temperatures. The nitric oxide was cooled and oxidized by the remaining atmospheric oxygen to nitrogen dioxide, and this was subsequently absorbed in water in a series of packed column or plate column absorption towers to produce dilute nitric acid. The first towers bubbled the nitrogen dioxide through water and non-reactive quartz fragments. About 20% of the produced oxides of nitrogen remained unreacted so the final towers contained an alkali solution to neutralize the rest. The process was very energy intensive and was rapidly displaced by the Ostwald process once cheap ammonia became available. Another early production method was invented by French engineer Albert Nodon around 1913. His method produced nitric acid from electrolysis of calcium nitrate converted by bacteria from nitrogenous matter in peat bogs. An earthenware pot surrounded by limestone was sunk into the peat and staked with tarred lumber to make a compartment for the carbon anode around which the nitric acid is formed. Nitric acid was pumped out from an earthenware pipe that was sunk down to the bottom of the pot. Fresh water was pumped into the top through another earthenware pipe to replace the fluid removed. The interior was filled with coke. Cast iron cathodes were sunk into the peat surrounding it. Resistance was about 3 ohms per cubic meter and the power supplied was around 10 volts. Production from one deposit was 800 tons per year. Once the Haber process for the efficient production of ammonia was introduced in 1913, nitric acid production from ammonia using the Ostwald process overtook production from the Birkeland–Eyde process. This method of production is still in use today. Physical and chemical properties Commercially available nitric acid is an azeotrope with water at a concentration of 68% . This solution has a boiling temperature of 120.5 °C (249 °F) at 1 atm. It is known as "concentrated nitric acid". The azeotrope of nitric acid and water is a colourless liquid at room temperature. Two solid hydrates are known: the monohydrate or oxonium nitrate and the trihydrate . An older density scale is occasionally seen, with concentrated nitric acid specified as 42 Baumé. Contamination with nitrogen dioxide Nitric acid is subject to thermal or light decomposition and for this reason it was often stored in brown glass bottles: This reaction may give rise to some non-negligible variations in the vapor pressure above the liquid because the nitrogen oxides produced dissolve partly or completely in the acid. The nitrogen dioxide () and/or dinitrogen tetroxide () remains dissolved in the nitric acid coloring it yellow or even red at higher temperatures. While the pure acid tends to give off white fumes when exposed to air, acid with dissolved nitrogen dioxide gives off reddish-brown vapors, leading to the common names "red fuming nitric acid" and "white fuming nitric acid". Nitrogen oxides () are soluble in nitric acid. Fuming nitric acid Commercial-grade fuming nitric acid contains 98% and has a density of 1.50 g/cm3. This grade is often used in the explosives industry. It is not as volatile nor as corrosive as the anhydrous acid and has the approximate concentration of 21.4 M. Red fuming nitric acid, or RFNA, contains substantial quantities of dissolved nitrogen dioxide () leaving the solution with a reddish-brown color. Due to the dissolved nitrogen dioxide, the density of red fuming nitric acid is lower at 1.490 g/cm3. 
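The concentration figures used in this section (weight per cent, molarity, degrees Baumé) are related by straightforward arithmetic. A sketch, assuming the common Baumé relation for liquids denser than water, specific gravity = 145 / (145 − °Bé), and a molar mass of 63.01 g/mol for HNO3; the density inputs are either quoted in the surrounding text or derived from that Baumé conversion:

```python
M_HNO3 = 63.01   # g/mol

def molarity(density_g_per_ml: float, mass_fraction: float) -> float:
    """Molar concentration (mol/L) of HNO3 from solution density and mass fraction."""
    return density_g_per_ml * 1000.0 * mass_fraction / M_HNO3

def baume_to_specific_gravity(degrees_be: float) -> float:
    """One common Baumé relation for liquids denser than water."""
    return 145.0 / (145.0 - degrees_be)

print(baume_to_specific_gravity(42))    # ~1.41, i.e. roughly the density of the 68% azeotrope
print(molarity(1.41, 0.68))             # ~15 mol/L for concentrated (68%) acid
print(molarity(1.512, 0.999))           # ~24 mol/L, consistent with the figure quoted for ~100% acid
```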
An inhibited fuming nitric acid, either white inhibited fuming nitric acid (IWFNA) or red inhibited fuming nitric acid (IRFNA), can be made by the addition of 0.6 to 0.7% hydrogen fluoride (HF). This fluoride is added for corrosion resistance in metal tanks. The fluoride creates a metal fluoride layer that protects the metal. Anhydrous nitric acid White fuming nitric acid, pure nitric acid or WFNA, is very close to anhydrous nitric acid. It is available as 99.9% nitric acid by assay, or about 24 molar. One specification for white fuming nitric acid is that it has a maximum of 2% water and a maximum of 0.5% dissolved NO2. Anhydrous nitric acid is a colorless, low-viscosity (mobile) liquid with a density of 1.512 g/cm3 that solidifies at −42 °C to form white crystals. Its dynamic viscosity under standard conditions is 0.76 cP. As it decomposes to nitrogen dioxide and water, it acquires a yellow tint. It boils at 83 °C. It is usually stored in a shatterproof amber glass bottle with twice the volume of head space to allow for pressure build-up, but even with those precautions the bottle must be vented monthly to release pressure. Structure and bonding The two terminal N–O bonds are nearly equivalent and relatively short, at 1.20 and 1.21 Å. This can be explained by theories of resonance; the two major canonical forms show some double-bond character in these two bonds, causing them to be shorter than N–O single bonds. The third N–O bond is elongated because its O atom is bonded to an H atom, with a bond length of 1.41 Å in the gas phase. The molecule is slightly aplanar (the NO2 and NOH planes are tilted away from each other by 2°) and there is restricted rotation about the N–OH single bond. Reactions Acid-base properties Nitric acid is normally considered to be a strong acid at ambient temperatures. There is some disagreement over the value of the acid dissociation constant, though the pKa value is usually reported as less than −1. This means that nitric acid in dilute solution is fully dissociated except in extremely acidic solutions. The pKa value rises to 1 at a temperature of 250 °C. Nitric acid can act as a base with respect to an acid such as sulfuric acid: HNO3 + 2 H2SO4 ⇌ NO2+ + H3O+ + 2 HSO4−; equilibrium constant: K ≈ 22. The nitronium ion, NO2+, is the active reagent in aromatic nitration reactions. Since nitric acid has both acidic and basic properties, it can undergo an autoprotolysis reaction, similar to the self-ionization of water: 2 HNO3 ⇌ NO2+ + NO3− + H2O. Reactions with metals Nitric acid reacts with most metals, but the details depend on the concentration of the acid and the nature of the metal. Dilute nitric acid behaves as a typical acid in its reaction with most metals. Magnesium, manganese, and zinc liberate H2: Mg + 2 HNO3 → Mg(NO3)2 + H2. Nitric acid can oxidize non-active metals such as copper and silver. With these non-active or less electropositive metals the products depend on temperature and the acid concentration. For example, copper reacts with dilute nitric acid at ambient temperatures with a 3:8 stoichiometry: 3 Cu + 8 HNO3 → 3 Cu(NO3)2 + 2 NO + 4 H2O. The nitric oxide produced may react with atmospheric oxygen to give nitrogen dioxide. With more concentrated nitric acid, nitrogen dioxide is produced directly in a reaction with 1:4 stoichiometry: Cu + 4 HNO3 → Cu(NO3)2 + 2 NO2 + 2 H2O. Upon reaction with nitric acid, most metals give the corresponding nitrates. Some metalloids and metals give the oxides instead; for instance, Sn, As, Sb, and Ti are oxidized into SnO2, As2O5, Sb2O5, and TiO2, respectively.
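The pKa values quoted in the acid-base discussion above can be translated into a dissociated fraction by solving the usual monoprotic equilibrium Ka = x²/(C − x). The sketch below does this for the ambient value (pKa ≈ −1) and the 250 °C value (pKa ≈ 1) at two illustrative concentrations, ignoring activity corrections:

```python
from math import sqrt

def dissociated_fraction(pKa: float, conc_mol_per_l: float) -> float:
    """Fraction of a monoprotic acid dissociated at total concentration C
    (ideal-solution approximation, no activity coefficients)."""
    Ka = 10.0 ** (-pKa)
    C = conc_mol_per_l
    # Solve x**2 / (C - x) = Ka for x, the dissociated concentration.
    x = (-Ka + sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0
    return x / C

for pKa in (-1.0, 1.0):          # ambient value vs. the value reported at 250 °C
    for C in (0.1, 1.0):         # illustrative dilute concentrations
        print(f"pKa {pKa:+.1f}, {C} M: {dissociated_fraction(pKa, C):.0%} dissociated")
```

At pKa = −1 the acid is essentially fully dissociated in dilute solution, in line with the statement above; at pKa = 1 the dissociated fraction drops substantially.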
Some precious metals, such as pure gold and the platinum-group metals, do not react with nitric acid, though pure gold does react with aqua regia, a mixture of concentrated nitric acid and hydrochloric acid. However, some less noble metals (Ag, Cu, ...) present in gold alloys relatively poor in gold, such as colored gold, can be easily oxidized and dissolved by nitric acid, leading to colour changes of the gold-alloy surface. Nitric acid is used in jewelry shops as a cheap means to quickly spot low-gold alloys (< 14 karats) and to rapidly assess the gold purity. Being a powerful oxidizing agent, nitric acid reacts with many non-metallic compounds, sometimes explosively. Depending on the acid concentration, temperature and the reducing agent involved, the end products can be variable. Reaction takes place with all metals except the noble metals series and certain alloys. As a general rule, oxidizing reactions occur primarily with the concentrated acid, favoring the formation of nitrogen dioxide (NO2). The powerful oxidizing properties of nitric acid are thermodynamic in nature, however, and some of its oxidation reactions are kinetically disfavored. The presence of small amounts of nitrous acid (HNO2) greatly increases the rate of reaction. Although chromium (Cr), iron (Fe), and aluminium (Al) readily dissolve in dilute nitric acid, the concentrated acid forms a metal-oxide layer that protects the bulk of the metal from further oxidation. The formation of this protective layer is called passivation. Typical passivation concentrations range from 20% to 50% by volume. Metals that are passivated by concentrated nitric acid are iron, cobalt, chromium, nickel, and aluminium. Reactions with non-metals Being a powerful oxidizing acid, nitric acid reacts with many organic materials, and the reactions may be explosive. The hydroxyl group will typically strip a hydrogen from the organic molecule to form water, and the remaining nitro group takes the hydrogen's place. Nitration of organic compounds with nitric acid is the primary method of synthesis of many common explosives, such as nitroglycerin and trinitrotoluene (TNT). Because many less stable byproducts are possible, these reactions must be carefully thermally controlled, and the byproducts removed to isolate the desired product. Reaction with non-metallic elements, with the exceptions of nitrogen, oxygen, noble gases, silicon, and halogens other than iodine, usually oxidizes them to their highest oxidation states as acids, with the formation of nitrogen dioxide for concentrated acid and nitric oxide for dilute acid. Concentrated nitric acid oxidizes I2, P4, and S8 into HIO3, H3PO4, and H2SO4, respectively. Although it reacts with graphite and amorphous carbon, it does not react with diamond; it can separate diamond from the graphite that it oxidizes. Xanthoproteic test Nitric acid reacts with proteins to form yellow nitrated products. This reaction is known as the xanthoproteic reaction. This test is carried out by adding concentrated nitric acid to the substance being tested, and then heating the mixture. If proteins that contain amino acids with aromatic rings are present, the mixture turns yellow. Upon adding a base such as ammonia, the color turns orange. These color changes are caused by nitrated aromatic rings in the protein. Xanthoproteic acid is formed when the acid contacts epithelial cells. The resulting local skin color changes are indicative of inadequate safety precautions when handling nitric acid.
Production Industrial nitric acid production uses the Ostwald process. The combined Ostwald and Haber processes are extremely efficient, requiring only air and natural gas feedstocks. The Ostwald process's key technical innovation is establishing the conditions under which anhydrous ammonia burns to nitric oxide (NO) instead of dinitrogen (N2): 4 NH3 + 5 O2 → 4 NO + 6 H2O. The nitric oxide is then oxidized, often with atmospheric oxygen, to nitrogen dioxide (NO2): 2 NO + O2 → 2 NO2. The dioxide then disproportionates in water to nitric acid and the nitric oxide feedstock: 3 NO2 + H2O → 2 HNO3 + NO. The net reaction is maximal oxidation of ammonia: NH3 + 2 O2 → HNO3 + H2O. Dissolved nitrogen oxides are either stripped (in the case of white fuming nitric acid) or remain in solution to form red fuming nitric acid. Commercial-grade nitric acid solutions are usually between 52% and 68% nitric acid by mass, the maximum distillable concentration. Further dehydration to 98% can be achieved with concentrated H2SO4. Historically, higher acid concentrations were also produced by dissolving additional nitrogen dioxide in the acid, but the last plant in the United States ceased using that process in 2012. More recently, electrochemical means have been developed to produce anhydrous acid from concentrated nitric acid feedstock. Laboratory synthesis Laboratory-scale nitric acid syntheses abound. Most take inspiration from the industrial techniques. A wide variety of nitrate salts metathesize with sulfuric acid (H2SO4) — for example, sodium nitrate: NaNO3 + H2SO4 → NaHSO4 + HNO3. Distillation at nitric acid's 83 °C boiling point then separates the acid from the solid metal-salt residue. The resulting acid solution is the 68.5% azeotrope, and can be further concentrated (as in industry) with either sulfuric acid or magnesium nitrate. Alternatively, thermal decomposition of copper(II) nitrate gives nitrogen dioxide and oxygen gases, 2 Cu(NO3)2 → 2 CuO + 4 NO2 + O2; these are then passed through water or hydrogen peroxide as in the Ostwald process: 3 NO2 + H2O → 2 HNO3 + NO, or 2 NO2 + H2O2 → 2 HNO3. Uses The main industrial use of nitric acid is for the production of fertilizers. Nitric acid is neutralized with ammonia to give ammonium nitrate. This application consumes 75–80% of the 26 million tonnes produced annually (1987). The other main applications are for the production of explosives, nylon precursors, and specialty organic compounds. Precursor to organic nitrogen compounds In organic synthesis, industrial and otherwise, the nitro group is a versatile functional group. A mixture of nitric and sulfuric acids introduces a nitro substituent onto various aromatic compounds by electrophilic aromatic substitution. Many explosives, such as TNT, are prepared this way: C6H5CH3 + 3 HNO3 → C6H2(NO2)3CH3 + 3 H2O. Either concentrated sulfuric acid or oleum absorbs the excess water. The nitro group can be reduced to give an amine group, allowing synthesis of aniline compounds from various nitrobenzenes: C6H5NO2 + 3 H2 → C6H5NH2 + 2 H2O. Use as an oxidant The precursor to nylon, adipic acid, is produced on a large scale by oxidation of "KA oil"—a mixture of cyclohexanone and cyclohexanol—with nitric acid. Rocket propellant Nitric acid has been used in various forms as the oxidizer in liquid-fueled rockets. These forms include red fuming nitric acid, white fuming nitric acid, mixtures with sulfuric acid, and these forms with HF inhibitor. IRFNA (inhibited red fuming nitric acid) was one of three liquid fuel components for the BOMARC missile. Niche uses Metal processing Nitric acid can be used to convert metals to oxidized forms, such as converting copper metal to cupric nitrate. It can also be used in combination with hydrochloric acid as aqua regia to dissolve noble metals such as gold (as chloroauric acid).
These salts can be used to purify gold and other metals beyond 99.9% purity by processes of recrystallization and selective precipitation. Its ability to dissolve certain metals selectively or be a solvent for many metal salts makes it useful in gold parting processes. Analytical reagent In elemental analysis by ICP-MS, ICP-AES, GFAA, and Flame AA, dilute nitric acid (0.5–5.0%) is used as a matrix compound for determining metal traces in solutions. Ultrapure trace metal grade acid is required for such determination, because small amounts of metal ions could affect the result of the analysis. It is also typically used in the digestion process of turbid water samples, sludge samples, solid samples as well as other types of unique samples which require elemental analysis via ICP-MS, ICP-OES, ICP-AES, GFAA and flame atomic absorption spectroscopy. Typically these digestions use a 50% solution of the purchased acid mixed with Type 1 DI water. In electrochemistry, nitric acid is used as a chemical doping agent for organic semiconductors, and in purification processes for raw carbon nanotubes. Woodworking In a low concentration (approximately 10%), nitric acid is often used to artificially age pine and maple. The color produced is a grey-gold very much like very old wax- or oil-finished wood (wood finishing). Etchant and cleaning agent The corrosive effects of nitric acid are exploited for some specialty applications, such as etching in printmaking, pickling stainless steel or cleaning silicon wafers in electronics. A solution of nitric acid, water and alcohol, nital, is used for etching metals to reveal the microstructure. ISO 14104 is one of the standards detailing this well-known procedure. Nitric acid is used either in combination with hydrochloric acid or alone to clean glass cover slips and glass slides for high-end microscopy applications. It is also used to clean glass before silvering when making silver mirrors. Commercially available aqueous blends of 5–30% nitric acid and 15–40% phosphoric acid are commonly used for cleaning food and dairy equipment, primarily to remove precipitated calcium and magnesium compounds (either deposited from the process stream or resulting from the use of hard water during production and cleaning). The phosphoric acid content helps to passivate ferrous alloys against corrosion by the dilute nitric acid. Nitric acid can be used as a spot test for alkaloids like LSD, giving a variety of colours depending on the alkaloid. Nuclear fuel reprocessing Nitric acid plays a key role in PUREX and other nuclear fuel reprocessing methods, where it can dissolve many different actinides. The resulting nitrates are converted to various complexes that can be reacted and extracted selectively in order to separate the metals from each other. Safety Nitric acid is a corrosive acid and a powerful oxidizing agent. The major hazard posed by it is chemical burns, as it carries out acid hydrolysis with proteins (amide) and fats (ester), which consequently decomposes living tissue (e.g. skin and flesh). Concentrated nitric acid stains human skin yellow due to its reaction with the keratin. These yellow stains turn orange when neutralized. Systemic effects are unlikely, and the substance is not considered a carcinogen or mutagen. The standard first-aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least 10–15 minutes to cool the tissue surrounding the acid burn and to prevent secondary damage.
Contaminated clothing is removed immediately and the underlying skin washed thoroughly. Being a strong oxidizing agent, nitric acid can react violently with many compounds. Use in acid attacks Nitric acid is one of the most common types of acid used in acid attacks.
Physical sciences
Inorganic compounds
null
21664
https://en.wikipedia.org/wiki/Nebula
Nebula
A nebula (plural nebulae or nebulas) is a distinct luminescent part of the interstellar medium, which can consist of ionized, neutral, or molecular hydrogen and also cosmic dust. Nebulae are often star-forming regions, such as in the Pillars of Creation in the Eagle Nebula. In these regions, the formations of gas, dust, and other materials "clump" together to form denser regions, which attract further matter and eventually become dense enough to form stars. The remaining material is then thought to form planets and other planetary system objects. Most nebulae are of vast size; some are hundreds of light-years in diameter. A nebula that is visible to the human eye from Earth would appear larger, but no brighter, from close by. The Orion Nebula, the brightest nebula in the sky and occupying an area twice the angular diameter of the full Moon, can be viewed with the naked eye but was missed by early astronomers. Although denser than the space surrounding them, most nebulae are far less dense than any vacuum created on Earth – a nebular cloud the size of the Earth would have a total mass of only a few kilograms. Earth's air has a density of approximately 10^19 molecules per cubic centimeter; by contrast, the densest nebulae can have densities of 10^4 molecules per cubic centimeter. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse that they can be detected only with long exposures and special filters. Some nebulae are variably illuminated by T Tauri variable stars. Originally, the term "nebula" was used to describe any diffuse astronomical object, including galaxies beyond the Milky Way. The Andromeda Galaxy, for instance, was once referred to as the Andromeda Nebula (and spiral galaxies in general as "spiral nebulae") before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble, and others. Edwin Hubble discovered that most nebulae are associated with stars and illuminated by starlight. He also helped categorize nebulae based on the type of light spectra they produced. Observational history Around 150 AD, Ptolemy recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Muslim Persian astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars (964). He noted "a little cloud" where the Andromeda Galaxy is located. He also cataloged the Omicron Velorum star cluster as a "nebulous star" and other nebulous objects, such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054. In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. However, the first detailed study of the Orion Nebula was not performed until 1659 by Christiaan Huygens, who also believed he was the first person to discover this nebulosity. In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 (including eight not previously known) in 1746. From 1751 to 1753, Nicolas-Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown.
Charles Messier then compiled a catalog of 103 "nebulae" (now called Messier objects, which included what are now known to be galaxies) by 1781; his interest was detecting comets, and these were objects that might be mistaken for them. The number of nebulae was then greatly increased by the efforts of William Herschel and his sister, Caroline Herschel. Their Catalogue of One Thousand New Nebulae and Clusters of Stars was published in 1786. A second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity rather than a more distant cluster. Beginning in 1864, William Huggins examined the spectra of about 70 nebulae. He found that roughly a third of them had the emission spectrum of a gas. The rest showed a continuous spectrum and were thus thought to consist of a mass of stars. A third category was added in 1912 when Vesto Slipher showed that the spectrum of the nebula that surrounded the star Merope matched the spectra of the Pleiades open cluster. Thus, the nebula radiates by reflected star light. In 1923, following the Great Debate, it became clear that many "nebulae" were in fact galaxies far from the Milky Way. Slipher and Edwin Hubble continued to collect the spectra from many different nebulae, finding 29 that showed emission spectra and 33 that had the continuous spectra of star light. In 1922, Hubble announced that nearly all nebulae are associated with stars and that their illumination comes from star light. He also discovered that the emission spectrum nebulae are nearly always associated with stars having spectral classifications of B or hotter (including all O-type main sequence stars), while nebulae with continuous spectra appear with cooler stars. Both Hubble and Henry Norris Russell concluded that the nebulae surrounding the hotter stars are transformed in some manner. Formation There are a variety of formation mechanisms for the different types of nebulae. Some nebulae form from gas that is already in the interstellar medium while others are produced by stars. Examples of the former case are giant molecular clouds, the coldest, densest phase of interstellar gas, which can form by the cooling and condensation of more diffuse gas. Examples of the latter case are planetary nebulae formed from material shed by a star in late stages of its stellar evolution. Star-forming regions are a class of emission nebula associated with giant molecular clouds. These form as a molecular cloud collapses under its own weight, producing stars. Massive stars may form in the center, and their ultraviolet radiation ionizes the surrounding gas, making it visible at optical wavelengths. The region of ionized hydrogen surrounding the massive stars is known as an H II region while the shells of neutral hydrogen surrounding the H II region are known as photodissociation region. Examples of star-forming regions are the Orion Nebula, the Rosette Nebula and the Omega Nebula. Feedback from star-formation, in the form of supernova explosions of massive stars, stellar winds or ultraviolet radiation from massive stars, or outflows from low-mass stars may disrupt the cloud, destroying the nebula after several million years. Other nebulae form as the result of supernova explosions; the death throes of massive, short-lived stars. 
The materials thrown off from the supernova explosion are then ionized by the energy and the compact object that its core produces. One of the best examples of this is the Crab Nebula, in Taurus. The supernova event was recorded in the year 1054 and is labeled SN 1054. The compact object that was created after the explosion lies in the center of the Crab Nebula and is a neutron star. Still other nebulae form as planetary nebulae. This is the final stage in the life of a low-mass star, like Earth's Sun. Stars with a mass up to 8–10 solar masses evolve into red giants and slowly lose their outer layers during pulsations in their atmospheres. When a star has lost enough material, its temperature increases and the ultraviolet radiation it emits can ionize the surrounding nebula that it has thrown off. The Sun will produce a planetary nebula and its core will remain behind in the form of a white dwarf. Types Classical types Objects named nebulae belong to four major groups: H II regions, large diffuse nebulae containing ionized hydrogen; planetary nebulae; supernova remnants (e.g., the Crab Nebula); and dark nebulae. Before their nature was understood, galaxies ("spiral nebulae") and star clusters too distant to be resolved as stars were also classified as nebulae, but no longer are. Not all cloud-like structures are nebulae; Herbig–Haro objects are an example. Flux Nebulae Diffuse nebulae Most nebulae can be described as diffuse nebulae, which means that they are extended and contain no well-defined boundaries. Diffuse nebulae can be divided into emission nebulae, reflection nebulae and dark nebulae. Visible-light nebulae may be divided into emission nebulae, which emit spectral line radiation from excited or ionized gas (mostly ionized hydrogen) and are often called H II regions (H II referring to ionized hydrogen), and reflection nebulae, which are visible primarily due to the light they reflect. Reflection nebulae themselves do not emit significant amounts of visible light, but are near stars and reflect light from them. Similar nebulae not illuminated by stars do not exhibit visible radiation, but may be detected as opaque clouds blocking light from luminous objects behind them; they are called dark nebulae. Although these nebulae have different visibility at optical wavelengths, they are all bright sources of infrared emission, chiefly from dust within the nebulae. Planetary nebulae Planetary nebulae are the remnants of the final stages of stellar evolution for mid-mass stars (between roughly 0.5 and 8 solar masses). Evolved asymptotic giant branch stars expel their outer layers outwards due to strong stellar winds, thus forming gaseous shells while leaving behind the star's core in the form of a white dwarf. Radiation from the hot white dwarf excites the expelled gases, producing emission nebulae with spectra similar to those of emission nebulae found in star formation regions. They are H II regions, because mostly hydrogen is ionized, but planetary nebulae are denser and more compact than the nebulae found in star formation regions. Planetary nebulae were given their name by the first astronomical observers who were initially unable to distinguish them from planets, which were of more interest to them. The Sun is expected to spawn a planetary nebula about 12 billion years after its formation. Protoplanetary nebulae Supernova remnants A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses.
The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. The expanding shell of gas forms a supernova remnant, a special diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields. Examples Ant Nebula Barnard's Loop Boomerang Nebula Cat's Eye Nebula Crab Nebula Eagle Nebula Eskimo Nebula Carina Nebula Fox Fur Nebula Helix Nebula Horsehead Nebula Engraved Hourglass Nebula Lagoon Nebula Orion Nebula Pelican Nebula Red Square Nebula Ring Nebula Rosette Nebula Tarantula Nebula Waterfall Nebula Catalogs Gum catalog (emission nebulae) RCW Catalogue (emission nebulae) Sharpless catalog (emission nebulae) Messier Catalogue Caldwell Catalogue Abell Catalog of Planetary Nebulae Barnard Catalogue (dark nebulae) Lynds' Catalogue of Bright Nebulae Lynds' Catalogue of Dark Nebulae
Physical sciences
Basics_3
null
21675
https://en.wikipedia.org/wiki/Natural%20resource
Natural resource
Natural resources are resources that are drawn from nature and used with few modifications. This includes the sources of valued characteristics such as commercial and industrial use, aesthetic value, scientific interest, and cultural value. On Earth, it includes sunlight, atmosphere, water, land, all minerals along with all vegetation, and wildlife. Natural resources are part of humanity's natural heritage or protected in nature reserves. Particular areas (such as the rainforest in Fatu-Hiva) often feature biodiversity and geodiversity in their ecosystems. Natural resources may be classified in different ways. Natural resources are materials and components (something that can be used) found within the environment. Every man-made product is composed of natural resources (at its fundamental level). A natural resource may exist as a separate entity such as freshwater, air, or any living organism such as a fish, or it may be transformed by extractivist industries into an economically useful form that must be processed to obtain the resource such as metal ores, rare-earth elements, petroleum, timber and most forms of energy. Some resources are renewable, which means that they can be used at a certain rate and natural processes will restore them. In contrast, many extractive industries rely heavily on non-renewable resources that can only be extracted once. Natural resource allocations can be at the centre of many economic and political confrontations both within and between countries. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources). Resource extraction is also a major source of human rights violations and environmental damage. The Sustainable Development Goals and other international development agendas frequently focus on creating more sustainable resource extraction, with some scholars and researchers focused on creating economic models, such as circular economy, that rely less on resource extraction, and more on reuse, recycling and renewable resources that can be sustainably managed. Classification There are various criteria for classifying natural resources. These include the source of origin, stages of development, renewability and ownership. Origin Biotic: Resources that originate from the biosphere and have life such as flora and fauna, fisheries, livestock, etc. Fossil fuels such as coal and petroleum are also included in this category because they are formed from decayed organic matter. Abiotic: Resources that originate from non-living and inorganic material. These include land, fresh water, air, rare-earth elements, and heavy metals including ores, such as gold, iron, copper, silver, etc. Stage of development Potential resources: Resources that are known to exist, but have not been utilized yet. These may be used in the future. For example, petroleum in sedimentary rocks that, until extracted and put to use, remains a potential resource. Actual resources: Resources that have been surveyed, quantified and qualified, and are currently used in development. These are typically dependent on technology and the level of their feasibility, wood processing for example. Reserves: The part of an actual resource that can be developed profitably in the future. Stocks: Resources that have been surveyed, but cannot be used due to lack of technology, hydrogen vehicles for example. Renewability/exhaustibility Renewable resources: These resources can be replenished naturally. 
Some of these resources, like solar energy, air, wind, water, etc. are continuously available and their quantities are not noticeably affected by human consumption. Though many renewable resources do not have such a rapid recovery rate, these resources are susceptible to depletion by over-use. Resources from a human use perspective are classified as renewable so long as the rate of replenishment/recovery exceeds that of the rate of consumption. They replenish easily compared to non-renewable resources. Non-renewable resources: These resources are formed over a long geological time period in the environment and cannot be renewed easily. Minerals are the most common resource included in this category. From the human perspective, resources are non-renewable when their rate of consumption exceeds the rate of replenishment/recovery; a good example of this is fossil fuels, which are in this category because their rate of formation is extremely slow (potentially millions of years), meaning they are considered non-renewable. Some resources naturally deplete in amount without human interference, the most notable of these being radio-active elements such as uranium, which naturally decay into heavy metals. Of these, the metallic minerals can be re-used by recycling them, but coal and petroleum cannot be recycled. Ownership Individual resources: Resources owned privately by individuals. These include plots, houses, plantations, pastures, ponds, etc. Community resources: Resources which are accessible to all the members of a community. E.g.: Cemeteries National resources: Resources that belong to the nation. The nation has legal powers to acquire them for public welfare. These also include minerals, forests and wildlife within the political boundaries and Exclusive economic zone. International resources: These resources are regulated by international organizations. E.g.: International waters. Extraction Resource extraction involves any activity that withdraws resources from nature. This can range in scale from the traditional use of preindustrial societies to global industry. Extractive industries are, along with agriculture, the basis of the primary sector of the economy. Extraction produces raw material, which is then processed to add value. Examples of extractive industries are hunting, trapping, mining, oil and gas drilling, and forestry. Natural resources can be a substantial part of a country's wealth; however, a sudden inflow of money caused by a resource extraction boom can create social problems including inflation harming other industries ("Dutch disease") and corruption, leading to inequality and underdevelopment, this is known as the "resource curse". Extractive industries represent a large growing activity in many less-developed countries but the wealth generated does not always lead to sustainable and inclusive growth. People often accuse extractive industry businesses as acting only to maximize short-term value, implying that less-developed countries are vulnerable to powerful corporations. Alternatively, host governments are often assumed to be only maximizing immediate revenue. Researchers argue there are areas of common interest where development goals and business cross. 
These present opportunities for international governmental agencies to engage with the private sector and host governments through revenue management and expenditure accountability, infrastructure development, employment creation, skills and enterprise development, and impacts on children, especially girls and women. A strong civil society can play an important role in ensuring the effective management of natural resources. Norway can serve as a role model in this regard, as it has good institutions and an open and dynamic public debate with strong civil society actors that provide an effective checks-and-balances system for the government's management of extractive industries. One example of such a check is the Extractive Industries Transparency Initiative (EITI), a global standard for the good governance of oil, gas and mineral resources. It seeks to address the key governance issues in the extractive sectors. However, in countries that lack such a strong and unified civil society, and where significant groups are dissatisfied with the government, natural resources can actually be a factor in whether a civil war starts and how long the war lasts. Depletion In recent years, the depletion of natural resources has become a major focus of governments and organizations such as the United Nations (UN). This is evident in the UN's Agenda 21 Section Two, which outlines the necessary steps for countries to take to sustain their natural resources. The depletion of natural resources is considered a sustainable development issue. The term sustainable development has many interpretations, most notably the Brundtland Commission's 'to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs'; however, in broad terms it is balancing the needs of the planet's people and species now and in the future. With regard to natural resources, depletion is of concern for sustainable development as it has the ability to degrade current environments and the potential to impact the needs of future generations. Depletion of natural resources is associated with social inequity. Considering that most biodiversity is located in developing countries, depletion of this resource could result in losses of ecosystem services for these countries. Some view this depletion as a major source of social unrest and conflicts in developing nations. At present, there is a particular concern for rainforest regions that hold most of the Earth's biodiversity. According to Nelson, deforestation and degradation affect 8.5% of the world's forests, with 30% of the Earth's surface already cropped. If we consider that 80% of people rely on medicines obtained from plants and that a large share of the world's prescription medicines have ingredients derived from plants, the loss of the world's rainforests could mean the loss of potential life-saving medicines yet to be discovered. The depletion of natural resources is caused by 'direct drivers of change' such as mining, petroleum extraction, fishing, and forestry as well as 'indirect drivers of change' such as demography (e.g. population growth), economy, society, politics, and technology. The current practice of agriculture is another factor causing depletion of natural resources, for example through the depletion of soil nutrients due to excessive use of nitrogen, and through desertification. The depletion of natural resources is a continuing concern for society.
This is seen in the cited quote given by Theodore Roosevelt, a well-known conservationist and former United States president, who was opposed to unregulated natural resource extraction. Protection In 1982, the United Nations developed the World Charter for Nature, which recognized the need to protect nature from further depletion due to human activity. It states that measures must be taken at all societal levels, from international to individual, to protect nature. It outlines the need for sustainable use of natural resources and suggests that the protection of resources should be incorporated into national and international systems of law. To look at the importance of protecting natural resources further, the World Ethic of Sustainability, developed by the IUCN, WWF and the UNEP in 1990, set out eight values for sustainability, including the need to protect natural resources from depletion. Since the development of these documents, many measures have been taken to protect natural resources including establishment of the scientific field and practice of conservation biology and habitat conservation, respectively. Conservation biology is the scientific study of the nature and status of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction. It is an interdisciplinary subject drawing on science, economics and the practice of natural resource management. The term conservation biology was introduced as the title of a conference held at the University of California, San Diego, in La Jolla, California, in 1978, organized by biologists Bruce A. Wilcox and Michael E. Soulé. Habitat conservation is a type of land management that seeks to conserve, protect and restore habitat areas for wild plants and animals, especially conservation reliant species, and prevent their extinction, fragmentation or reduction in range. Management Natural resource management is a discipline in the management of natural resources such as land, water, soil, plants, and animals—with a particular focus on how management affects quality of life for present and future generations. Hence, sustainable development is followed according to the judicious use of resources to supply present and future generations. The disciplines of fisheries, forestry, and wildlife are examples of large subdisciplines of natural resource management. Management of natural resources involves identifying who has the right to use the resources and who does not to define the management boundaries of the resource. The resources may be managed by the users according to the rules governing when and how the resource is used depending on local condition or the resources may be managed by a governmental organization or other central authority. A "...successful management of natural resources depends on freedom of speech, a dynamic and wide-ranging public debate through multiple independent media channels and an active civil society engaged in natural resource issues..." because of the nature of the shared resources, the individuals who are affected by the rules can participate in setting or changing them. The users have rights to devise their own management institutions and plans under the recognition by the government. The right to resources includes land, water, fisheries, and pastoral rights. The users or parties accountable to the users have to actively monitor and ensure the utilisation of the resource compliance with the rules and impose penalties on those people who violate the rules. 
These conflicts are resolved quickly and efficiently by the local institution according to the seriousness and context of the offense. The global science-based platform to discuss natural resources management is the World Resources Forum, based in Switzerland.
Physical sciences
Basics_5
null
21689
https://en.wikipedia.org/wiki/NTSC
NTSC
NTSC (from National Television System Committee) is the first American standard for analog television, published and adopted in 1941. In 1961, it was assigned the designation System M. It is also known as EIA standard 170. In 1953, a second NTSC standard was adopted, which allowed for color television broadcast compatible with the existing stock of black-and-white receivers. It is one of three major color formats for analog television, the others being PAL and SECAM. NTSC color is usually associated with the System M; this combination is sometimes called NTSC II. The only other broadcast television system to use NTSC color was the System J. Brazil used System M with PAL color. Vietnam, Cambodia and Laos used System M with SECAM color - Vietnam later started using PAL in the early 1990s. The NTSC/System M standard was used in most of the Americas (except Argentina, Brazil, Paraguay, and Uruguay), Myanmar, South Korea, Taiwan, Philippines, Japan, and some Pacific Islands nations and territories (see map). Since the introduction of digital sources (ex: DVD) the term NTSC has been used to refer to digital formats with number of active lines between 480 and 487 having 30 or 29.97 frames per second rate, serving as a digital shorthand to System M. The so-called NTSC-Film standard has a digital standard resolution of 720 × 480 pixel for DVD-Videos, 480 × 480 pixel for Super Video CDs (SVCD, Aspect Ratio: 4:3) and 352 × 240 pixel for Video CDs (VCD). The digital video (DV) camcorder format that is equivalent to NTSC is 720 × 480 pixels. The digital television (DTV) equivalent is 704 × 480 pixels. History The National Television System Committee was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts between companies over the introduction of a nationwide analog television system in the United States. In March 1941, the committee issued a technical standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA). Technical advancements of the vestigial side band technique allowed for the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan line standard (already being used by RCA's NBC TV network) and Philco's and DuMont's desire to increase the number of scan lines to between 605 and 800. The standard recommended a frame rate of 30 frames (images) per second, consisting of two interlaced fields per frame at 262.5 lines per field and 60 fields per second. Other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation (FM) for the sound signal (which was quite new at the time). In January 1950, the committee was reconstituted to standardize color television. The FCC had briefly approved a 405-line field-sequential color television standard in October 1950, which was developed by CBS. The CBS system was incompatible with existing black-and-white receivers. It used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second. Legal action by rival RCA kept commercial use of the system off the air until June 1951, and regular broadcasts only lasted a few months before manufacture of all color television sets was banned by the Office of Defense Mobilization in October, ostensibly due to the Korean War. 
A variant of the CBS system was later used by NASA to broadcast pictures of astronauts from space. CBS rescinded its system in March 1953, and the FCC replaced it on December 17, 1953, with the NTSC color standard, which was cooperatively developed by several companies, including RCA and Philco. In December 1953, the FCC unanimously approved what is now called the NTSC color television standard (later defined as RS-170a). The compatible color standard retained full backward compatibility with then-existing black-and-white television sets. Color information was added to the black-and-white image by introducing a color subcarrier of precisely 315/88 MHz (usually described as 3.579545 MHz±10 Hz). The precise frequency was chosen so that horizontal line-rate modulation components of the chrominance signal fall exactly in between the horizontal line-rate modulation components of the luminance signal, such that the chrominance signal could easily be filtered out of the luminance signal on new television sets, and that it would be minimally visible in existing televisions. Due to limitations of frequency divider circuits at the time the color standard was promulgated, the color subcarrier frequency was constructed as composite frequency assembled from small integers, in this case 5×7×9/(8×11) MHz. The horizontal line rate was reduced to approximately 15,734 lines per second (3.579545×2/455 MHz = 9/572 MHz) from 15,750 lines per second, and the frame rate was reduced to 30/1.001 ≈ 29.970 frames per second (the horizontal line rate divided by 525 lines/frame) from 30 frames per second. These changes amounted to 0.1 percent and were readily tolerated by then-existing television receivers. The first publicly announced network television broadcast of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on August 30, 1953, although it was viewable in color only at the network's headquarters. The first nationwide viewing of NTSC color came on the following January 1 with the coast-to-coast broadcast of the Tournament of Roses Parade, viewable on prototype color receivers at special presentations across the country. The first color NTSC television camera was the RCA TK-40, used for experimental broadcasts in 1953; an improved version, the TK-40A, introduced in March 1954, was the first commercially available color television camera. Later that year, the improved TK-41 became the standard camera used throughout much of the 1960s. The NTSC standard has been adopted by other countries, including some in the Americas and Japan. Digital conversion With the advent of digital television, analog broadcasts were largely phased out. Most US NTSC broadcasters were required by the FCC to shut down their analog transmitters by February 17, 2009, however this was later moved to June 12, 2009. Low-power stations, Class A stations and translators were required to shut down by 2015, although an FCC extension allowed some of those stations operating on Channel 6 to operate until July 13, 2021. The remaining Canadian analog TV transmitters, in markets not subject to the mandatory transition in 2011, were scheduled to be shut down by January 14, 2022, under a schedule published by Innovation, Science and Economic Development Canada in 2017; however the scheduled transition dates have already passed for several stations listed that continue to broadcast in analog (e.g. 
CFJC-TV Kamloops, which has not yet transitioned to digital, is listed as having been required to transition by November 20, 2020). Most countries using the NTSC standard, as well as those using other analog television standards, have switched to, or are in the process of switching to, newer digital television standards, with at least four different standards in use around the world. North America, parts of Central America, and South Korea are adopting or have adopted the ATSC standards, while other countries, such as Japan, are adopting or have adopted other standards instead of ATSC. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on June 12, 2009, and by August 31, 2011, in Canada and most other NTSC markets. The majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year. After a pilot program in 2013, most full-power analog stations in Mexico left the air on ten dates in 2015, with some 500 low-power and repeater stations allowed to remain in analog until the end of 2016. Digital broadcasting allows higher-resolution television, but digital standard-definition television continues to use the frame rate and number of lines of resolution established by the analog NTSC standard. Technical details Resolution and refresh rate NTSC color encoding is used with the System M television signal, which consists of 30/1.001 (approximately 29.97) interlaced frames of video per second. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines. The visible raster is made up of 486 scan lines. The later digital standard, Rec. 601, only uses 480 of these lines for the visible raster. The remainder (the vertical blanking interval) allows for vertical synchronization and retrace. This blanking interval was originally designed to simply blank the electron beam of the receiver's CRT to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines may now contain other data such as closed captioning and vertical interval timecode (VITC). In the complete raster (disregarding half lines due to interlacing) the even-numbered scan lines (every other line that would be even if counted in the video signal, e.g. {2, 4, 6, ..., 524}) are drawn in the first field, and the odd-numbered (every other line that would be odd if counted in the video signal, e.g. {1, 3, 5, ..., 525}) are drawn in the second field, to yield a flicker-free image at the field refresh frequency of 60/1.001 Hz (approximately 59.94 Hz). For comparison, 625-line systems (576 visible lines), usually used with PAL-B/G and SECAM color, have a higher vertical resolution but a lower temporal resolution of 25 frames or 50 fields per second. The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz frequency of alternating current power used in the United States. Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera.
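The rates quoted in this section are related by simple arithmetic. The sketch below (illustrative only) reproduces the color-era frame, field, and line rates exactly from the 30/1.001 frame rate and the 525-line raster.

```python
from fractions import Fraction

LINES_PER_FRAME = 525
FIELDS_PER_FRAME = 2

frame_rate = Fraction(30000, 1001)          # exactly 30/1.001 frames per second
field_rate = frame_rate * FIELDS_PER_FRAME  # two interlaced fields per frame
line_rate = frame_rate * LINES_PER_FRAME    # horizontal scan rate

print(float(frame_rate))  # 29.97002997... frames per second
print(float(field_rate))  # 59.94005994... fields per second (~59.94 Hz)
print(float(line_rate))   # 15734.265734... lines per second (~15.734 kHz)
```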
This synchronization to the power-line frequency, as mentioned, is how the NTSC field refresh frequency worked in the original black-and-white system; when color was added to the system, however, the refresh frequency was shifted slightly downward by 0.1%, to approximately 59.94 Hz, to eliminate stationary dot patterns in the difference frequency between the sound and color carriers (as explained below in §Color encoding). By the time the frame rate changed to accommodate color, it was nearly as easy to trigger the camera shutter from the video signal itself. The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields, which meant the master oscillator frequency had to be divided down by an odd number. At the time, the only practical method of frequency division was the use of a chain of vacuum tube multivibrators, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of thermal drift with vacuum tube devices. The closest practical sequence to 500 that meets these criteria was 3 × 5 × 5 × 7 = 525. (For the same reason, 625-line PAL-B/G and SECAM use 5 × 5 × 5 × 5 = 625, the old British 405-line system used 3 × 3 × 3 × 3 × 5 = 405, the French 819-line system used 3 × 3 × 7 × 13 = 819, etc.) Colorimetry Colorimetry refers to the specific colorimetric characteristics of the system and its components, including the specific primary colors used, the camera, the display, etc. Over its history, NTSC color had two distinctly defined colorimetries, commonly referred to as NTSC 1953 and SMPTE C. Manufacturers introduced a number of variations for technical, economic, marketing, and other reasons. NTSC 1953 The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the colorimetric values of the system. Early color television receivers, such as the RCA CT-100, were faithful to this specification (which was based on prevailing motion picture standards), having a larger gamut than most of today's monitors. Their low-efficiency phosphors (notably in the red) were weak and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard at both the receiver and broadcaster was the source of considerable color variation. SMPTE C To ensure more uniform color reproduction, some manufacturers incorporated color correction circuits into sets that converted the received signal—encoded for the original 1953 colorimetric values—to adjust for the actual phosphor characteristics used within the monitor.
Since such color correction cannot be performed accurately on the nonlinear gamma-corrected signals transmitted, the adjustment can only be approximated, introducing both hue and luminance errors for highly saturated colors. Similarly at the broadcaster stage, in 1968–69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture video monitors. This specification survives today as the SMPTE C phosphor specification. As with home receivers, it was further recommended that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards. In 1987, the Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry, adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145, prompting many manufacturers to modify their camera designs to directly encode for SMPTE C colorimetry without color correction, as approved in SMPTE standard 170M, "Composite Analog Video Signal – NTSC for Studio Applications" (1994). As a consequence, the ATSC digital television standard states that for 480i signals, SMPTE C colorimetry should be assumed unless colorimetric data is included in the transport stream. Japanese NTSC never changed its primaries and white point to SMPTE C, continuing to use the 1953 NTSC primaries and white point. Both the PAL and SECAM systems used the original 1953 NTSC colorimetry as well until 1970; unlike NTSC, however, the European Broadcasting Union (EBU) rejected color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values. Color compatibility issues Because the gamuts of the different colorimetries differ, the variations between them can result in significant visual differences. Adjusting for proper viewing requires gamut mapping via LUTs or additional color grading. SMPTE Recommended Practice RP 167-1995 refers to such an automatic correction as an "NTSC corrective display matrix." For instance, material prepared for 1953 NTSC may look desaturated when displayed on SMPTE C or ATSC/BT.709 displays, and may also exhibit noticeable hue shifts. On the other hand, SMPTE C materials may appear slightly more saturated on BT.709/sRGB displays, or significantly more saturated on P3 displays, if the appropriate gamut mapping is not performed. Color encoding NTSC uses a luminance-chrominance encoding system, incorporating concepts invented in 1938 by Georges Valensi. Using a separate luminance signal maintained backward compatibility with black-and-white television sets in use at the time; only color sets would recognize the chroma signal, which was essentially ignored by black-and-white sets. The red, green, and blue primary color signals are weighted and summed into a single luma signal, designated Y′ (Y prime), which takes the place of the original monochrome signal. The color difference information is encoded into the chrominance signal, which carries only the color information. This allows black-and-white receivers to display NTSC color signals by simply ignoring the chrominance signal. Some black-and-white TVs sold in the U.S.
after the introduction of color broadcasting in 1953 were designed to filter chroma out, but the early B&W sets did not do this and chrominance could be seen as a crawling dot pattern in areas of the picture that held saturated colors. To derive the separate signals containing only color information, the difference is determined between each color primary and the summed luma. Thus the red difference signal is R′ − Y′ and the blue difference signal is B′ − Y′. These difference signals are then used to derive two new color signals known as I (in-phase) and Q (in quadrature) in a process called QAM. The I/Q color space is rotated relative to the difference-signal color space, such that orange–blue color information (to which the human eye is most sensitive) is transmitted on the I signal at 1.3 MHz bandwidth, while the Q signal encodes purple–green color information at 0.4 MHz bandwidth; this allows the chrominance signal to use less overall bandwidth without noticeable color degradation. The two signals each amplitude-modulate 3.58 MHz carriers which are 90 degrees out of phase with each other, and the results are added together, with the carriers themselves being suppressed. The result can be viewed as a single sine wave with varying phase relative to a reference carrier and with varying amplitude. The varying phase represents the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation. The 3.579545 MHz subcarrier is then added to the luminance to form the composite color signal, which modulates the video signal carrier. 3.58 MHz is often stated as an abbreviation for 3.579545 MHz. For a color TV to recover hue information from the color subcarrier, it must have a zero-phase reference to replace the previously suppressed carrier. The NTSC signal includes a short sample of this reference signal, known as the colorburst, located on the back porch of each horizontal synchronization pulse. The color burst consists of a minimum of eight cycles of the unmodulated (pure original) color subcarrier. The TV receiver has a local oscillator, which is synchronized with these color bursts to create a reference signal. Combining this reference phase signal with the chrominance signal allows the recovery of the I and Q signals, which, in conjunction with the Y′ signal, are used to reconstruct the individual R′, G′, and B′ signals that are then sent to the CRT to form the image. In CRT televisions, the NTSC signal is turned into three color signals: red, green, and blue, each controlling an electron gun that is designed to excite only the corresponding red, green, or blue phosphor dots. TV sets with digital circuitry use sampling techniques to process the signals but the result is the same. For both analog and digital sets processing an analog NTSC signal, the original three color signals are transmitted using three discrete signals (Y′, I, and Q) and then recovered as three separate colors (R, G, and B) and presented as a color image. When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. If non-linear distortion happens to the broadcast signal, the 3.579545 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen.
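A compact way to see the luma/chroma split just described is the R′G′B′ → Y′IQ conversion. The sketch below is illustrative only: the coefficients are the commonly quoted approximate NTSC values, not figures taken from this article.

```python
def rgb_to_yiq(r, g, b):
    """Encode gamma-corrected R'G'B' components (each 0..1) into Y', I, Q.

    Y' is the weighted luma sum described above; I and Q are the rotated
    color-difference axes (approximate, commonly quoted coefficients).
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

# Saturated red carries modest luma but strong chrominance:
print([round(v, 3) for v in rgb_to_yiq(1.0, 0.0, 0.0)])  # [0.299, 0.596, 0.211]
# White carries maximum luma and essentially zero chrominance:
print([round(v, 3) for v in rgb_to_yiq(1.0, 1.0, 1.0)])  # ~[1.0, 0.0, 0.0]
```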
To make the resulting pattern less noticeable, designers adjusted the original 15,750 Hz scanline rate down by a factor of 1.001 (0.1%) to match the audio carrier frequency divided by the factor 286, resulting in a field rate of approximately 59.94 Hz. This adjustment ensures that the difference between the sound carrier and the color subcarrier (the most problematic intermodulation product of the two carriers) is an odd multiple of half the line rate, which is the necessary condition for the dots on successive lines to be opposite in phase, making them least noticeable. The 59.94 rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an n + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. The original black-and-white standard, with its 15,750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had to either raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black and white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose the line frequency to be changed for the color standard. In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is 4,500,000 / 15,750 ≈ 285.71. In the color standard, this ratio is rounded to the integer 286, which means the color standard's line rate is 4,500,000 / 286 ≈ 15,734.266 Hz. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing 15,734.266 lines per second by 262.5 lines per field gives approximately 59.94 fields per second. Transmission modulation method An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel. The video carrier is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two sidebands, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 1.25 MHz of the lower sideband, known as a vestigial sideband, is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is quadrature-amplitude-modulated with a suppressed carrier. The audio signal is frequency-modulated, like the audio signals broadcast by FM radio stations in the 88–108 MHz band, but with a 25 kHz maximum frequency deviation, as opposed to the 75 kHz used on the FM band, making analog television audio signals sound quieter than FM radio signals as received on a wideband receiver. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel.
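The arithmetic behind these figures is compact enough to verify directly. The short Python sketch below reproduces the published rates from the relationships stated above; the variable names are illustrative only.

from fractions import Fraction

audio_subcarrier = Fraction(4_500_000)           # Hz, unchanged from the B&W standard
line_rate = audio_subcarrier / 286               # ≈ 15,734.266 Hz (was 15,750 Hz)
field_rate = line_rate / Fraction(525, 2)        # 262.5 lines per field → ≈ 59.94 Hz
frame_rate = field_rate / 2                      # ≈ 29.97 Hz
color_subcarrier = line_rate * Fraction(455, 2)  # odd multiple of half the line rate

print(float(line_rate), float(field_rate), float(frame_rate), float(color_subcarrier))
# ≈ 15734.27 Hz, 59.94 Hz, 29.97 Hz, 3579545.45 Hz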
Sometimes a channel may contain an MTS signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. This is normally the case when stereo audio and/or second audio program signals are used. The same extensions are used in ATSC, where the ATSC digital carrier is broadcast at 0.31 MHz above the lower bound of the channel. "Setup" is a 54 mV (7.5 IRE) voltage offset between the "black" and "blanking" levels. It is unique to NTSC. CVBS stands for Color, Video, Blanking, and Sync. Frame rate conversion There is a large difference in frame rate between film, which runs at 24 frames per second, and the NTSC standard, which runs at approximately 29.97 (10 MHz × 63/88/455/525) frames per second. In regions that use 25-fps television and video standards, this difference can be overcome by speed-up. For 30-fps standards, a process called "3:2 pulldown" is used. One film frame is transmitted for three video fields (lasting 1½ video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). Two film frames are thus transmitted in five video fields, for an average of 2½ video fields per film frame. The average frame rate is thus 60 ÷ 2.5 = 24 frames per second, so the average film speed is nominally exactly what it should be. (In reality, over the course of an hour of real time, 215,827.2 video fields are displayed, representing 86,330.88 frames of film, while in an hour of true 24-fps film projection, exactly 86,400 frames are shown: thus, 29.97-fps NTSC transmission of 24-fps film runs at 99.92% of the film's normal speed.) Still-framing on playback can display a video frame with fields from two different film frames, so any difference between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/"stutter" during slow camera pans (telecine judder). Film shot specifically for NTSC television is usually taken at 30 (instead of 24) frames per second to avoid 3:2 pulldown. To show 25-fps material (such as European television series and some European movies) on NTSC equipment, every fifth frame is duplicated and then the resulting stream is interlaced. Film shot for NTSC television at 24 frames per second has traditionally been accelerated by 1/24 (to about 104.17% of normal speed) for transmission in regions that use 25-fps television standards. This increase in picture speed has traditionally been accompanied by a similar increase in the pitch and tempo of the audio. More recently, frame-blending has been used to convert 24 FPS video to 25 FPS without altering its speed. Film shot for television in regions that use 25-fps television standards can be handled in either of two ways: The film can be shot at 24 frames per second. In this case, when transmitted in its native region, the film may be accelerated to 25 fps according to the analog technique described above, or kept at 24 fps by the digital technique described above. When the same film is transmitted in regions that use a nominal 30-fps television standard, there is no noticeable change in speed, tempo, and pitch. The film can be shot at 25 frames per second. In this case, when transmitted in its native region, the film is shown at its normal speed, with no alteration of the accompanying soundtrack.
When the same film is shown in regions that use a 30-fps nominal television standard, every fifth frame is duplicated, and there is still no noticeable change in speed, tempo, and pitch. Because both film speeds have been used in 25-fps regions, viewers can face confusion about the true speed of video and audio, and the pitch of voices, sound effects, and musical performances, in television films from those regions. For example, they may wonder whether the Jeremy Brett series of Sherlock Holmes television films, made in the 1980s and early 1990s, was shot at 24 fps and then transmitted at an artificially fast speed in 25-fps regions, or whether it was shot at 25 fps natively and then slowed to 24 fps for NTSC exhibition. These discrepancies exist not only in television broadcasts over the air and through cable, but also in the home-video market, on both tape and disc, including laser disc and DVD. In digital television and video, which are replacing their analog predecessors, single standards that can accommodate a wider range of frame rates still show the limits of analog regional standards. The initial version of the ATSC standard, for example, allowed frame rates of 23.976, 24, 29.97, 30, 59.94, 60, 119.88 and 120 frames per second, but not 25 and 50. Modern ATSC allows 25 and 50 FPS. Modulation for analog satellite transmission Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. AM is a linear modulation method, so a given demodulated signal-to-noise ratio (SNR) requires an equally high received RF SNR. The SNR of studio quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas. Wideband FM is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows a RF SNR of only 10 dB or less. The wider noise bandwidth reduces this 40 dB power saving by 36 MHz / 6 MHz = 8 dB for a substantial net reduction of 32 dB. Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference. 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex, discrete, or matrix and unrelated audio and data signals may be placed on additional subcarriers. A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the satellite downlink power spectral density in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band. In half transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid intermodulation distortion in the satellite transponder. A single FM signal is constant amplitude, so it can saturate a transponder without distortion. Field order An NTSC frame consists of two fields, F1 (field one) and F2 (field two). The field dominance depends on a combination of factors, including decisions by various equipment manufacturers as well as historical conventions. As a result, most professional equipment has the option to switch between a dominant upper or dominant lower field. 
It is not advisable to use the terms even or odd when speaking of fields, due to substantial ambiguity. For instance if the line numbering for a particular system starts at zero, while another system starts its line numbering at one. As such the same field could be even or odd. While an analog television set does not care about field dominance per se, field dominance is important when editing NTSC video. Incorrect interpretation of field order can cause a shuddering effect as moving objects jump forward and behind on each successive field. This is of particular importance when interlaced NTSC is transcoded to a format with a different field dominance and vice versa. Field order is also important when transcoding progressive video to interlaced NTSC, as any place there is a cut between two scenes in the progressive video, there could be a flash field in the interlaced video if the field dominance is incorrect. The film telecine process where a three-two pull down is utilized to convert 24 frames to 30, will also provide unacceptable results if the field order is incorrect. Because each field is temporally unique for material captured with an interlaced camera, converting interlaced to a digital progressive-frame medium is difficult, as each progressive frame will have artifacts of motion on every alternating line. This can be observed in PC-based video-playing utilities and is frequently solved simply by transcoding the video at half resolution and only using one of the two available fields. Variants NTSC-M Unlike PAL and SECAM, with its many varied underlying broadcast television systems in use throughout the world, NTSC color encoding is almost invariably used with broadcast system M, giving NTSC-M. NTSC-N and NTSC 50 NTSC-N was originally proposed in the 1960s to the CCIR as a 50 Hz broadcast method for System N countries Paraguay, Uruguay and Argentina before they chose PAL. In 1978, with the introduction of Apple II Europlus, it was effectively reintroduced as "NTSC 50", a pseudo-system combining 625-line video with 3.58 MHz NTSC color. For example, an Atari ST running PAL software on their NTSC color display used this system as the monitor could not decode PAL color. Most analog NTSC television sets and monitors with a V-Hold knob can display this system after adjusting the vertical hold. NTSC-J Only Japan's variant "NTSC-J" is slightly different: in Japan, black level and blanking level of the signal are identical (at 0 IRE), as they are in PAL, while in American NTSC, black level is slightly higher (7.5 IRE) than blanking level. Since the difference is quite small, a slight turn of the brightness knob is all that is required to correctly show the "other" variant of NTSC on any set as it is supposed to be; most watchers might not even notice the difference in the first place. The channel encoding on NTSC-J differs slightly from NTSC-M. In particular, the Japanese VHF band runs from channels 1–12 (located on frequencies directly above the 76–90 MHz Japanese FM radio band) while the North American VHF TV band uses channels 2–13 (54–72 MHz, 76–88 MHz and 174–216 MHz) with 88–108 MHz allocated to FM radio broadcasting. Japan's UHF TV channels are therefore numbered from 13 up and not 14 up, but otherwise uses the same UHF broadcasting frequencies as those in North America. 
NTSC 4.43 NTSC 4.43 is a pseudo-system that transmits a NTSC color subcarrier of 4.43 MHz instead of 3.58 MHz The resulting output is only viewable by TVs that support the resulting pseudo-system (such as most PAL TVs). Using a native NTSC TV to decode the signal yields no color, while using an incompatible PAL TV to decode the system yields erratic colors (observed to be lacking red and flickering randomly). The format was used by the USAF TV based in Germany during the Cold War and Hong Kong Cable Television. It was also found as an optional output on some LaserDisc players sold in markets where the PAL system is used. The NTSC 4.43 system, while not a broadcast format, appears most often as a playback function of PAL cassette format VCRs, beginning with the Sony 3/4" U-Matic format and then following onto Betamax and VHS format machines, commonly advertised as "NTSC playback on PAL TV". Multi-standard video monitors were already in use in Europe to accommodate broadcast sources in PAL, SECAM, and NTSC video formats. The heterodyne color-under process of U-Matic, Betamax & VHS lent itself to minor modification of VCR players to accommodate NTSC format cassettes. The color-under format of VHS uses a 629 kHz subcarrier while U-Matic & Betamax use a 688 kHz subcarrier to carry an amplitude modulated chroma signal for both NTSC and PAL formats. Since the VCR was ready to play the color portion of the NTSC recording using PAL color mode, the PAL scanner and capstan speeds had to be adjusted from PAL's 50 Hz field rate to NTSC's 59.94 Hz field rate, and faster linear tape speed. The changes to the PAL VCR are minor thanks to the existing VCR recording formats. The output of the VCR when playing an NTSC cassette in NTSC 4.43 mode is 525 lines/29.97 frames per second with PAL compatible heterodyned color. The multi-standard receiver is already set to support the NTSC H & V frequencies; it just needs to do so while receiving PAL color. The existence of those multi-standard receivers was probably part of the drive for region coding of DVDs. As the color signals are component on disc for all display formats, almost no changes would be required for PAL DVD players to play NTSC (525/29.97) discs as long as the display was frame-rate compatible. OSKM (USSR-NTSC) In January 1960, (7 years prior to adoption of the modified SECAM version) the experimental TV studio in Moscow started broadcasting using the OSKM system. OSKM was the version of NTSC adapted to European D/K 625/50 standard. The OSKM abbreviation means "Simultaneous system with quadrature modulation" (In Russian: Одновременная Система с Квадратурной Модуляцией). It used the color coding scheme that was later used in PAL (U and V instead of I and Q). The color subcarrier frequency was 4.4296875 MHz and the bandwidth of U and V signals was near 1.5 MHz. Only circa 4000 TV sets of 4 models (Raduga, Temp-22, Izumrud-201 and Izumrud-203) were produced for studying the real quality of TV reception. These TV's were not commercially available, despite being included in the goods catalog for trade network of the USSR. The broadcasting with this system lasted about 3 years and was ceased well before SECAM transmissions started in the USSR. None of the current multi-standard TV receivers can support this TV system. NTSC-film Film content commonly shot at 24 frames/s can be converted to 30 frames/s through the telecine process to duplicate frames as needed. 
Mathematically for NTSC this is relatively simple as it is only needed to duplicate every fourth frame. Various techniques are employed. NTSC with an actual frame rate of   (approximately 23.976) frames/s is often defined as NTSC-film. A process known as pullup, also known as pulldown, generates the duplicated frames upon playback. This method is common for H.262/MPEG-2 Part 2 digital video so the original content is preserved and played back on equipment that can display it or can be converted for equipment that cannot. Comparative quality For NTSC, and to a lesser extent, PAL, reception problems can degrade the color accuracy of the picture where ghosting can dynamically change the phase of the color burst with picture content, thus altering the color balance of the signal. The only receiver compensation is in the professional TV receiver ghost canceling circuits used by cable companies. The vacuum-tube electronics used in televisions through the 1960s led to various technical problems. Among other things, the color burst phase would often drift. In addition, the TV studios did not always transmit properly, leading to hue changes when channels were changed, which is why NTSC televisions were equipped with a tint control. PAL and SECAM televisions had less of a need for one. SECAM in particular was very robust, but PAL, while excellent in maintaining skin tones which viewers are particularly sensitive to, nevertheless would distort other colors in the face of phase errors. With phase errors, only "Deluxe PAL" receivers would get rid of "Hanover bars" distortion. Hue controls are still found on NTSC TVs, but color drifting generally ceased to be a problem for more modern circuitry by the 1970s. When compared to PAL, in particular, NTSC color accuracy and consistency were sometimes considered inferior, leading to video professionals and television engineers jokingly referring to NTSC as Never The Same Color, Never Twice the Same Color, or No True Skin Colors, while for the more expensive PAL system it was necessary to Pay for Additional Luxury. The use of NTSC coded color in S-Video systems, as well as the use of closed-circuit composite NTSC, both eliminate the phase distortions because there is no reception ghosting in a closed-circuit system to smear the color burst. For VHS videotape on the horizontal axis and frame rate of the three color systems when used with this scheme, the use of S-Video gives the higher resolution picture quality on monitors and TVs without a high-quality motion-compensated comb filtering section. (The NTSC resolution on the vertical axis is lower than the European standards, 525 lines against 625.) However, it uses too much bandwidth for over-the-air transmission. The Atari 800 and Commodore 64 home computers generate S-video, but only when used with specially designed monitors as no TV at the time supported the separate chroma and luma on standard RCA jacks. In 1987, a standardized four-pin mini-DIN socket was introduced for S-video input with the introduction of S-VHS players, which were the first device produced to use the four-pin plugs. However, S-VHS never became very popular. Video game consoles in the 1990s began offering S-video output as well. Vertical interval reference The standard NTSC video image contains some lines (lines 1–21 of each field) that are not visible (this is known as the Vertical Blanking Interval, or VBI); all are beyond the edge of the viewable image, but only lines 1–9 are used for the vertical-sync and equalizing pulses. 
The remaining lines were deliberately blanked in the original NTSC specification to provide time for the electron beam in CRT screens to return to the top of the display. VIR (or Vertical interval reference), widely adopted in the 1980s, attempts to correct some of the color problems with NTSC video by adding studio-inserted reference data for luminance and chrominance levels on line 19. Suitably equipped television sets could then employ these data in order to adjust the display to a closer match of the original studio image. The actual VIR signal contains three sections, the first having 70 percent luminance and the same chrominance as the color burst signal, and the other two having 50 percent and 7.5 percent luminance respectively. A less-used successor to VIR, GCR, also added ghost (multipath interference) removal capabilities. The remaining vertical blanking interval lines are typically used for datacasting or ancillary data such as video editing timestamps (vertical interval timecodes or SMPTE timecodes on lines 12–14), test data on lines 17–18, a network source code on line 20 and closed captioning, XDS, and V-chip data on line 21. Early teletext applications also used vertical blanking interval lines 14–18 and 20, but teletext over NTSC was never widely adopted by viewers. Many stations transmit TV Guide On Screen (TVGOS) data for an electronic program guide on VBI lines. The primary station in a market will broadcast 4 lines of data, and backup stations will broadcast 1 line. In most markets the PBS station is the primary host. TVGOS data can occupy any line from 10–25, but in practice its limited to 11–18, 20 and line 22. Line 22 is only used for 2 broadcast, DirecTV and CFPL-TV. TiVo data is also transmitted on some commercials and program advertisements so that customers can autorecord the program being advertised, and is also used in weekly half-hour paid programs on Ion Television and the Discovery Channel which highlight TiVo promotions and advertisers. Countries and territories that are using or once used NTSC Below are countries and territories that currently use or once used the NTSC system. Many of these have switched or are currently switching from NTSC to digital television standards such as ATSC (United States, Canada, Mexico, Suriname, Jamaica, South Korea, Saint Lucia, Bahamas, Barbados, Grenada, Antigua and Barbuda, Haiti), ISDB (Japan, Philippines, part of South America and Saint Kitts and Nevis), DVB-T (Taiwan, Panama, Colombia, Myanmar, and Trinidad and Tobago) or DTMB (Cuba). (Over-the-air NTSC broadcasts (Channel 9) have been terminated as of March 2016, local broadcast stations have now switched to digital channels 20.1 and 20.2) (Over-the-air NTSC broadcasting in major cities ceased August 2011 as a result of legislative fiat, to be replaced with ATSC. Some one-station markets or markets served only by full-power repeaters remain analog.) (Analog shutoff occurred in 2024, now switching to ISDB-Tb) (Analog shutoff scheduled to 2022, simulcasting DVB-T) (NTSC broadcast to be abandoned by December 2018, simulcasting ISDB-Tb) (Over-the-air NTSC broadcasting scheduled to be abandoned by 2021, simulcast in ATSC) (Over-the-air NTSC broadcasting scheduled to be abandoned by December 2024, simulcast in ISDB-Tb) (Over-the-air NTSC broadcasting scheduled to be abandoned by December 2020, simulcast in ISDB-Tb) (Will convert to ATSC 3.0 instead of 1.0. 
The conversion will begin in 2022 and is expected to be completed by 2023) (fully switched to ISDB in 2012, after the 2011 Tōhoku earthquake and tsunami delayed the planned 2011 rollout in three prefectures) (in Compact of Free Association with US; US aid funded NTSC adoption) plans to transition from NTSC announced on July 2, 2004, started conversion in 2013 full transition was scheduled on December 31, 2015, but due to technical and economic issues for some transmitters — the full transition was extended to be completed on December 31, 2016. (in Compact of Free Association with US, transitioning to DVB-T) (a US military base) (also used PAL) (in Compact of Free Association with US; adopted NTSC before independence) (NTSC broadcasts to be abandoned by 2020, simulcasting DVB-T. NTSC broadcasts to be abandoned in areas with more than 90% of DVB-T reception) (NTSC broadcast to be abandoned by December 31, 2017, simulcasting ISDB-Tb) (NTSC broadcast was intended to be abandoned at the end of 2015. However, in later 2014 — it was postponed to 2019, and later extended to 2023. All analog broadcasts are expected to be shut off by the end of 2024 or by 2026. It will simulcast in ISDB-T.) (now uses ATSC) (existed on NTSC since March 5, 2024) (simulcast in NTSC, SECAM and PAL, before switching to PAL in the early 1990s) (also used 8 MHz spacing of DVB-T2 (same bandwidth spacing in European Netherlands) on Encrypted Terrestrial Digital TV subscription via WTN CABLE) (Will convert to ATSC 3.0 instead of 1.0. The conversion will begin in 2023 and is expected to be completed by 2026) (Full-power over-the-air NTSC broadcasting was switched off on June 12, 2009 in favor of ATSC. Low-power stations, Class A stations were switched off on September 1, 2015. Translators and other Low-power stations were supposed to transition on the same day Class-A stations shut off analog services but it was postponed to July 13, 2021, due to a spectrum auction. Most remaining analog cable television systems are also unaffected) Experimented (Between 1962 and 1963, Rede Tupi and Rede Excelsior made the first unofficial transmissions in color, in specific programs in the city of São Paulo, before the official adoption of PAL-M by the Brazilian Government on February 19, 1972) (Experimented on 405-line variant of NTSC, then UK chose 625-line for PAL broadcasting.) Countries and territories that have ceased using NTSC The following countries and regions no longer use NTSC for terrestrial broadcasts.
Technology
Broadcasting
null
21690
https://en.wikipedia.org/wiki/Number
Number
A number is a mathematical object used to count, measure, and label. The most basic examples are the natural numbers 1, 2, 3, 4, and so forth. Numbers can be represented in language with number words. More universally, individual numbers can be represented by symbols, called numerals; for example, "5" is a numeral that represents the number five. As only a relatively small number of symbols can be memorized, basic numerals are commonly organized in a numeral system, which is an organized way to represent any number. The most common numeral system is the Hindu–Arabic numeral system, which allows for the representation of any non-negative integer using a combination of ten fundamental numeric symbols, called digits. In addition to their use in counting and measuring, numerals are often used for labels (as with telephone numbers), for ordering (as with serial numbers), and for codes (as with ISBNs). In common usage, a numeral is not clearly distinguished from the number that it represents. In mathematics, the notion of number has been extended over the centuries to include zero (0), negative numbers, rational numbers such as one half (1/2), real numbers such as the square root of 2 (√2) and π, and complex numbers, which extend the real numbers with a square root of −1 (and its combinations with real numbers by adding or subtracting its multiples). Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, division, and exponentiation. Their study or usage is called arithmetic, a term which may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world. For example, in Western society, the number 13 is often regarded as unlucky, and "a million" may signify "a lot" rather than an exact quantity. Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought. Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers, and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. In modern mathematics, number systems are considered important special examples of more general algebraic structures such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance. History First use of numbers Bones and other artifacts have been discovered with marks cut into them that many believe are tally marks. These tally marks may have been used for counting elapsed time, such as numbers of days, lunar cycles or keeping records of quantities, such as of animals. A tallying system has no concept of place value (as in modern decimal notation), which limits its representation of large numbers. Nonetheless, tallying systems are considered the first kind of abstract numeral system. The first known system with place value was the Mesopotamian base 60 system ( BC) and the earliest known base 10 system dates to 3100 BC in Egypt. Numerals Numbers should be distinguished from numerals, the symbols used to represent numbers.
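To make the distinction between a number and its numerals concrete, here is a small, purely illustrative Python sketch that renders one and the same number as a positional numeral in several bases; the function name and the bases chosen are arbitrary.

def to_numeral(n: int, base: int = 10) -> str:
    """Render the abstract number n as a positional numeral in the given base,
    illustrating that a numeral is just one representation of the number."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(to_numeral(255))      # '255'      (Hindu–Arabic decimal numeral)
print(to_numeral(255, 16))  # 'FF'       (same number, different numeral)
print(to_numeral(255, 2))   # '11111111'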
The Egyptians invented the first ciphered numeral system, and the Greeks followed by mapping their counting numbers onto Ionian and Doric alphabets. Roman numerals, a system that used combinations of letters from the Roman alphabet, remained dominant in Europe until the spread of the superior Hindu–Arabic numeral system around the late 14th century, and the Hindu–Arabic numeral system remains the most common system for representing numbers in the world today. The key to the effectiveness of the system was the symbol for zero, which was developed by ancient Indian mathematicians around 500 AD. Zero The first known recorded use of zero dates to AD 628, and appeared in the Brāhmasphuṭasiddhānta, the main work of the Indian mathematician Brahmagupta. He treated 0 as a number and discussed operations involving it, including division by zero. By this time (the 7th century), the concept had clearly reached Cambodia in the form of Khmer numerals, and documentation shows the idea later spreading to China and the Islamic world. Brahmagupta's Brāhmasphuṭasiddhānta is the first book that mentions zero as a number, hence Brahmagupta is usually considered the first to formulate the concept of zero. He gave rules of using zero with negative and positive numbers, such as "zero plus a positive number is a positive number, and a negative number plus zero is the negative number". The Brāhmasphuṭasiddhānta is the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit in representing another number as was done by the Babylonians or as a symbol for a lack of quantity as was done by Ptolemy and the Romans. The use of 0 as a number should be distinguished from its use as a placeholder numeral in place-value systems. Many ancient texts used 0. Babylonian and Egyptian texts used it. Egyptians used the word nfr to denote zero balance in double entry accounting. Indian texts used a Sanskrit word or to refer to the concept of void. In mathematics texts this word often refers to the number zero. In a similar vein, Pāṇini (5th century BC) used the null (zero) operator in the Ashtadhyayi, an early example of an algebraic grammar for the Sanskrit language (also see Pingala). There are other uses of zero before Brahmagupta, though the documentation is not as complete as it is in the Brāhmasphuṭasiddhānta. Records show that the Ancient Greeks seemed unsure about the status of 0 as a number: they asked themselves "How can 'nothing' be something?" leading to interesting philosophical and, by the Medieval period, religious arguments about the nature and existence of 0 and the vacuum. The paradoxes of Zeno of Elea depend in part on the uncertain interpretation of 0. (The ancient Greeks even questioned whether  was a number.) The late Olmec people of south-central Mexico began to use a symbol for zero, a shell glyph, in the New World, possibly by the but certainly by 40 BC, which became an integral part of Maya numerals and the Maya calendar. Maya arithmetic used base 4 and base 5 written as base 20. George I. Sánchez in 1961 reported a base 4, base 5 "finger" abacus. By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. Because it was used alone, not as just a placeholder, this Hellenistic zero was the first documented use of a true zero in the Old World. 
In later Byzantine manuscripts of his Syntaxis Mathematica (Almagest), the Hellenistic zero had morphed into the Greek letter Omicron (otherwise meaning 70). Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, meaning nothing, not as a symbol. When division produced 0 as a remainder, , also meaning nothing, was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An isolated use of their initial, N, was used in a table of Roman numerals by Bede or a colleague about 725, a true zero symbol. Negative numbers The abstract concept of negative numbers was recognized as early as 100–50 BC in China. The Nine Chapters on the Mathematical Art contains methods for finding the areas of figures; red rods were used to denote positive coefficients, black for negative. The first reference in a Western work was in the 3rd century AD in Greece. Diophantus referred to the equation equivalent to (the solution is negative) in Arithmetica, saying that the equation gave an absurd result. During the 600s, negative numbers were in use in India to represent debts. Diophantus' previous reference was discussed more explicitly by Indian mathematician Brahmagupta, in Brāhmasphuṭasiddhānta in 628, who used negative numbers to produce the general form quadratic formula that remains in use today. However, in the 12th century in India, Bhaskara gives negative roots for quadratic equations but says the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots". European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be interpreted as debts (chapter 13 of , 1202) and later as losses (in ). René Descartes called them false roots as they cropped up in algebraic polynomials yet he found a way to swap true roots and false roots as well. At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most non-zero digit of the corresponding positive number's numeral. The first use of negative numbers in a European work was by Nicolas Chuquet during the 15th century. He used them as exponents, but referred to them as "absurd numbers". As recently as the 18th century, it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless. Rational numbers It is likely that the concept of fractional numbers dates to prehistoric times. The Ancient Egyptians used their Egyptian fraction notation for rational numbers in mathematical texts such as the Rhind Mathematical Papyrus and the Kahun Papyrus. Classical Greek and Indian mathematicians made studies of the theory of rational numbers, as part of the general study of number theory. The best known of these is Euclid's Elements, dating to roughly 300 BC. Of the Indian texts, the most relevant is the Sthananga Sutra, which also covers number theory as part of a general study of mathematics. The concept of decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. For example, it is common for the Jain math sutra to include calculations of decimal-fraction approximations to pi or the square root of 2. Similarly, Babylonian math texts used sexagesimal (base 60) fractions with great frequency. 
Irrational numbers The earliest known use of irrational numbers was in the Indian Sulba Sutras composed between 800 and 500 BC. The first existence proofs of irrational numbers is usually attributed to Pythagoras, more specifically to the Pythagorean Hippasus of Metapontum, who produced a (most likely geometrical) proof of the irrationality of the square root of 2. The story goes that Hippasus discovered irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers, and could not accept the existence of irrational numbers. He could not disprove their existence through logic, but he could not accept irrational numbers, and so, allegedly and frequently reported, he sentenced Hippasus to death by drowning, to impede spreading of this disconcerting news. The 16th century brought final European acceptance of negative integral and fractional numbers. By the 17th century, mathematicians generally used decimal fractions with modern notation. It was not, however, until the 19th century that mathematicians separated irrationals into algebraic and transcendental parts, and once more undertook the scientific study of irrationals. It had remained almost dormant since Euclid. In 1872, the publication of the theories of Karl Weierstrass (by his pupil E. Kossak), Eduard Heine, Georg Cantor, and Richard Dedekind was brought about. In 1869, Charles Méray had taken the same point of departure as Heine, but the theory is generally referred to the year 1872. Weierstrass's method was completely set forth by Salvatore Pincherle (1880), and Dedekind's has received additional prominence through the author's later work (1888) and endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series, while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has received later contributions at the hands of Weierstrass, Kronecker, and Méray. The search for roots of quintic and higher degree equations was an important development, the Abel–Ruffini theorem (Ruffini 1799, Abel 1824) showed that they could not be solved by radicals (formulas involving only arithmetical operations and roots). Hence it was necessary to consider the wider set of algebraic numbers (all solutions to polynomial equations). Galois (1832) linked polynomial equations to group theory giving rise to the field of Galois theory. Simple continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler, and at the opening of the 19th century were brought into prominence through the writings of Joseph Louis Lagrange. Other noteworthy contributions have been made by Druckenmüller (1837), Kunze (1857), Lemke (1870), and Günther (1872). Ramus first connected the subject with determinants, resulting, with the subsequent contributions of Heine, Möbius, and Günther, in the theory of . Transcendental numbers and reals The existence of transcendental numbers was first established by Liouville (1844, 1851). Hermite proved in 1873 that e is transcendental and Lindemann proved in 1882 that π is transcendental. Finally, Cantor showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite, so there is an uncountably infinite number of transcendental numbers. 
Infinity and infinitesimals The earliest known conception of mathematical infinity appears in the Yajur Veda, an ancient Indian script, which at one point states, "If you remove a part from infinity or add a part to infinity, still what remains is infinity." Infinity was a popular topic of philosophical study among the Jain mathematicians c. 400 BC. They distinguished between five types of infinity: infinite in one and two directions, infinite in area, infinite everywhere, and infinite perpetually. The symbol ∞ is often used to represent an infinite quantity. Aristotle defined the traditional Western notion of mathematical infinity. He distinguished between actual infinity and potential infinity—the general consensus being that only the latter had true value. Galileo Galilei's Two New Sciences discussed the idea of one-to-one correspondences between infinite sets. But the next major advance in the theory was made by Georg Cantor; in 1895 he published a book about his new set theory, introducing, among other things, transfinite numbers and formulating the continuum hypothesis. In the 1960s, Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. The system of hyperreal numbers represents a rigorous method of treating the ideas about infinite and infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers ever since the invention of infinitesimal calculus by Newton and Leibniz. A modern geometrical version of infinity is given by projective geometry, which introduces "ideal points at infinity", one for each spatial direction. Each family of parallel lines in a given direction is postulated to converge to the corresponding ideal point. This is closely related to the idea of vanishing points in perspective drawing. Complex numbers The earliest fleeting reference to square roots of negative numbers occurred in the work of the mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible frustum of a pyramid. They became more prominent when in the 16th century closed formulas for the roots of third and fourth degree polynomials were discovered by Italian mathematicians such as Niccolò Fontana Tartaglia and Gerolamo Cardano. It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. This was doubly unsettling since they did not even consider negative numbers to be on firm ground at the time. When René Descartes coined the term "imaginary" for these quantities in 1637, he intended it as derogatory. (See imaginary number for a discussion of the "reality" of complex numbers.) A further source of confusion was that the equation √−1 · √−1 = −1 seemed capriciously inconsistent with the algebraic identity √a · √b = √(ab), which is valid for positive real numbers a and b, and was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity, and the related identity 1/√a = √(1/a), in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led him to the convention of using the special symbol i in place of √−1 to guard against this mistake. The 18th century saw the work of Abraham de Moivre and Leonhard Euler.
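The pitfall described above is easy to reproduce with modern complex arithmetic. The short Python sketch below uses the principal square root (as Python's cmath module does) to show that the product of square roots and the square root of the product disagree when both numbers are negative, which is precisely the trap that motivated the dedicated symbol i.

import cmath

a, b = -1, -1
lhs = cmath.sqrt(a) * cmath.sqrt(b)   # i * i = -1
rhs = cmath.sqrt(a * b)               # sqrt(1) = 1
print(lhs, rhs, cmath.isclose(lhs, rhs))  # (-1+0j) (1+0j) False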
De Moivre's formula (1730) states: (cos θ + i sin θ)^n = cos(nθ) + i sin(nθ), while Euler's formula of complex analysis (1748) gave us: e^(iθ) = cos θ + i sin θ. The existence of complex numbers was not completely accepted until Caspar Wessel described the geometrical interpretation in 1799. Carl Friedrich Gauss rediscovered and popularized it several years later, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's De algebra tractatus. Also in 1799, Gauss provided the first generally accepted proof of the fundamental theorem of algebra, showing that every polynomial over the complex numbers has a full set of solutions in that realm. Gauss studied complex numbers of the form a + bi, where a and b are integers (now called Gaussian integers) or rational numbers. His student, Gotthold Eisenstein, studied the type a + bω, where ω is a complex root of x³ − 1 = 0 (now called Eisenstein integers). Other such classes (called cyclotomic fields) of complex numbers derive from the roots of unity x^k − 1 = 0 for higher values of k. This generalization is largely due to Ernst Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893. In 1850 Victor Alexandre Puiseux took the key step of distinguishing between poles and branch points, and introduced the concept of essential singular points. This eventually led to the concept of the extended complex plane. Prime numbers Prime numbers have been studied throughout recorded history. They are positive integers that are divisible only by 1 and themselves. Euclid devoted one book of the Elements to the theory of primes; in it he proved the infinitude of the primes and the fundamental theorem of arithmetic, and presented the Euclidean algorithm for finding the greatest common divisor of two numbers. In 240 BC, Eratosthenes used the Sieve of Eratosthenes to quickly isolate prime numbers. But most further development of the theory of primes in Europe dates to the Renaissance and later eras. In 1796, Adrien-Marie Legendre conjectured the prime number theorem, describing the asymptotic distribution of primes. Other results concerning the distribution of the primes include Euler's proof that the sum of the reciprocals of the primes diverges, and the Goldbach conjecture, which claims that any sufficiently large even number is the sum of two primes. Yet another conjecture related to the distribution of prime numbers is the Riemann hypothesis, formulated by Bernhard Riemann in 1859. The prime number theorem was finally proved by Jacques Hadamard and Charles de la Vallée-Poussin in 1896. Goldbach and Riemann's conjectures remain unproven and unrefuted. Main classification Numbers can be classified into sets, called number sets or number systems, such as the natural numbers and the real numbers. The main number systems are the natural numbers, the integers, the rational numbers, the real numbers, and the complex numbers. Each of these number systems is a subset of the next one. So, for example, a rational number is also a real number, and every real number is also a complex number. This can be expressed symbolically as N ⊂ Z ⊂ Q ⊂ R ⊂ C. Natural numbers The most familiar numbers are the natural numbers (sometimes called whole numbers or counting numbers): 1, 2, 3, and so on. Traditionally, the sequence of natural numbers started with 1 (0 was not even considered a number by the Ancient Greeks). However, in the 19th century, set theorists and other mathematicians started including 0 (cardinality of the empty set, i.e.
0 elements, where 0 is thus the smallest cardinal number) in the set of natural numbers. Today, different mathematicians use the term to describe both sets, including 0 or not. The mathematical symbol for the set of all natural numbers is N, also written ℕ, and sometimes N₀ or N₁ when it is necessary to indicate whether the set should start with 0 or 1, respectively. In the base 10 numeral system, in almost universal use today for mathematical operations, the symbols for natural numbers are written using ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix or base is the number of unique numerical digits, including zero, that a numeral system uses to represent numbers (for the decimal system, the radix is 10). In this base 10 system, the rightmost digit of a natural number has a place value of 1, and every other digit has a place value ten times that of the place value of the digit to its right. In set theory, which is capable of acting as an axiomatic foundation for modern mathematics, natural numbers can be represented by classes of equivalent sets. For instance, the number 3 can be represented as the class of all sets that have exactly three elements. Alternatively, in Peano Arithmetic, the number 3 is represented as sss0, where s is the "successor" function (i.e., 3 is the third successor of 0). Many different representations are possible; all that is needed to formally represent 3 is to inscribe a certain symbol or pattern of symbols three times. Integers The negative of a positive integer is defined as a number that produces 0 when it is added to the corresponding positive integer. Negative numbers are usually written with a negative sign (a minus sign). As an example, the negative of 7 is written −7, and 7 + (−7) = 0. When the set of negative numbers is combined with the set of natural numbers (including 0), the result is defined as the set of integers, Z, also written ℤ. Here the letter Z comes from German Zahl ('number'). The set of integers forms a ring with the operations addition and multiplication. The natural numbers form a subset of the integers. As there is no common standard for the inclusion or not of zero in the natural numbers, the natural numbers without zero are commonly referred to as positive integers, and the natural numbers with zero are referred to as non-negative integers. Rational numbers A rational number is a number that can be expressed as a fraction with an integer numerator and a positive integer denominator. Negative denominators are allowed, but are commonly avoided, as every rational number is equal to a fraction with positive denominator. Fractions are written as two integers, the numerator and the denominator, with a dividing bar between them. The fraction m/n represents m parts of a whole divided into n equal parts. Two different fractions may correspond to the same rational number; for example 1/2 and 2/4 are equal, that is: 1/2 = 2/4. In general, a/b = c/d if and only if a × d = c × b (a short illustrative sketch follows at the end of this passage). If the absolute value of m is greater than n (assumed to be positive), then the absolute value of the fraction m/n is greater than 1. Fractions can be greater than, less than, or equal to 1 and can also be positive, negative, or 0. The set of all rational numbers includes the integers since every integer can be written as a fraction with denominator 1. For example −7 can be written −7/1. The symbol for the rational numbers is Q (for quotient), also written ℚ. Real numbers The symbol for the real numbers is R, also written as ℝ. They include all the measuring numbers. Every real number corresponds to a point on the number line.
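Here is the short sketch referred to above: an illustrative Python check of the cross-multiplication rule for equality of fractions, shown next to the standard-library Fraction type, which normalizes every fraction to lowest terms with a positive denominator.

from fractions import Fraction

def same_rational(a: int, b: int, c: int, d: int) -> bool:
    """a/b and c/d name the same rational number exactly when a*d == c*b."""
    return a * d == c * b

print(same_rational(1, 2, 2, 4))         # True:  1/2 == 2/4
print(same_rational(1, 3, 2, 7))         # False
print(Fraction(2, 4) == Fraction(1, 2))  # True: the library normalizes to 1/2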
The following paragraph will focus primarily on positive real numbers. The treatment of negative real numbers is according to the general rules of arithmetic and their denotation is simply prefixing the corresponding positive numeral by a minus sign, e.g. −123.456. Most real numbers can only be approximated by decimal numerals, in which a decimal point is placed to the right of the digit with place value 1. Each digit to the right of the decimal point has a place value one-tenth of the place value of the digit to its left. For example, 123.456 represents 123456/1000, or, in words, one hundred, two tens, three ones, four tenths, five hundredths, and six thousandths. A real number can be expressed by a finite number of decimal digits only if it is rational and its fractional part has a denominator whose prime factors are 2 or 5 or both, because these are the prime factors of 10, the base of the decimal system. Thus, for example, one half is 0.5, one fifth is 0.2, one-tenth is 0.1, and one fiftieth is 0.02. Representing other real numbers as decimals would require an infinite sequence of digits to the right of the decimal point. If this infinite sequence of digits follows a pattern, it can be written with an ellipsis or another notation that indicates the repeating pattern. Such a decimal is called a repeating decimal. Thus 1/3 can be written as 0.333..., with an ellipsis to indicate that the pattern continues. Forever repeating 3s are also written as 0.3 with an overline drawn over the 3. It turns out that these repeating decimals (including the repetition of zeroes) denote exactly the rational numbers, i.e., all rational numbers are also real numbers, but it is not the case that every real number is rational. A real number that is not rational is called irrational. A famous irrational real number is the number π, the ratio of the circumference of any circle to its diameter. When pi is written as 3.14159..., as it sometimes is, the ellipsis does not mean that the decimals repeat (they do not), but rather that there is no end to them. It has been proved that π is irrational. Another well-known number, proven to be an irrational real number, is the square root of 2, that is, the unique positive real number whose square is 2. Both these numbers have been approximated (by computer) to trillions of digits. Not only these prominent examples but almost all real numbers are irrational and therefore have no repeating patterns and hence no corresponding decimal numeral. They can only be approximated by decimal numerals, denoting rounded or truncated real numbers. Any rounded or truncated number is necessarily a rational number, of which there are only countably many. All measurements are, by their nature, approximations, and always have a margin of error. Thus 123.456 is considered an approximation of any real number greater than or equal to 123.4555 and strictly less than 123.4565 (rounding to 3 decimals), or of any real number greater than or equal to 123.456 and strictly less than 123.457 (truncation after the 3rd decimal). Digits that suggest a greater accuracy than the measurement itself does, should be removed. The remaining digits are then called significant digits. For example, measurements with a ruler can seldom be made without a margin of error of at least 0.001 m. If the sides of a rectangle are measured as 1.23 m and 4.56 m, then multiplication gives an area for the rectangle between 5.603011 m² and 5.614591 m². Since not even the second digit after the decimal place is preserved, the following digits are not significant. Therefore, the result is usually rounded to 5.61.
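The 2-and-5 criterion for a finite decimal expansion can be tested mechanically. The following Python sketch, given as an illustration only, strips the factors of 2 and 5 from a reduced denominator and reports whether anything is left over.

from fractions import Fraction

def terminates_in_base_10(q: Fraction) -> bool:
    """A rational has a finite decimal expansion iff its reduced denominator
    has no prime factors other than 2 and 5 (the prime factors of 10)."""
    d = q.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(terminates_in_base_10(Fraction(1, 2)))   # True  -> 0.5
print(terminates_in_base_10(Fraction(1, 50)))  # True  -> 0.02
print(terminates_in_base_10(Fraction(1, 3)))   # False -> 0.333... repeating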
Just as the same fraction can be written in more than one way, the same real number may have more than one decimal representation. For example, 0.999..., 1.0, 1.00, 1.000, ..., all represent the natural number 1. A given real number has only the following decimal representations: an approximation to some finite number of decimal places, an approximation in which a pattern is established that continues for an unlimited number of decimal places, or an exact value with only finitely many decimal places. In this last case, the last non-zero digit may be replaced by the digit one smaller followed by an unlimited number of 9s, or the last non-zero digit may be followed by an unlimited number of zeros. Thus the exact real number 3.74 can also be written 3.7399999999... and 3.74000000000.... Similarly, a decimal numeral with an unlimited number of 0s can be rewritten by dropping the 0s to the right of the rightmost nonzero digit, and a decimal numeral with an unlimited number of 9s can be rewritten by increasing by one the rightmost digit less than 9, and changing all the 9s to the right of that digit to 0s. Finally, an unlimited sequence of 0s to the right of a decimal place can be dropped. For example, 6.849999999999... = 6.85 and 6.850000000000... = 6.85. Finally, if all of the digits in a numeral are 0, the number is 0, and if all of the digits in a numeral are an unending string of 9s, you can drop the nines to the right of the decimal place, and add one to the string of 9s to the left of the decimal place. For example, 99.999... = 100. The real numbers also have an important but highly technical property called the least upper bound property. It can be shown that any ordered field, which is also complete, is isomorphic to the real numbers. The real numbers are not, however, an algebraically closed field, because they do not include a solution (often called a square root of minus one) to the algebraic equation x² + 1 = 0. Complex numbers Moving to a greater level of abstraction, the real numbers can be extended to the complex numbers. This set of numbers arose historically from trying to find closed formulas for the roots of cubic and quadratic polynomials. This led to expressions involving the square roots of negative numbers, and eventually to the definition of a new number: a square root of −1, denoted by i, a symbol assigned by Leonhard Euler, and called the imaginary unit. The complex numbers consist of all numbers of the form a + bi, where a and b are real numbers. Because of this, complex numbers correspond to points on the complex plane, a vector space of two real dimensions. In the expression a + bi, the real number a is called the real part and b is called the imaginary part. If the real part of a complex number is 0, then the number is called an imaginary number or is referred to as purely imaginary; if the imaginary part is 0, then the number is a real number. Thus the real numbers are a subset of the complex numbers. If the real and imaginary parts of a complex number are both integers, then the number is called a Gaussian integer. The symbol for the complex numbers is C or ℂ. The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered.
That is, there is no consistent meaning assignable to saying that i is greater than 1, nor is there any meaning in saying that i is less than 1. In technical terms, the complex numbers lack a total order that is compatible with field operations. Subclasses of the integers Even and odd numbers An even number is an integer that is "evenly divisible" by two, that is divisible by two without remainder; an odd number is an integer that is not even. (The old-fashioned term "evenly divisible" is now almost always shortened to "divisible".) Any odd number n may be constructed by the formula n = 2k + 1, for a suitable integer k. Starting with k = 0, the first non-negative odd numbers are {1, 3, 5, 7, ...}. Any even number m has the form m = 2k, where k is again an integer. Similarly, the first non-negative even numbers are {0, 2, 4, 6, ...}. Prime numbers A prime number, often shortened to just prime, is an integer greater than 1 that is not the product of two smaller positive integers. The first few prime numbers are 2, 3, 5, 7, and 11. There is no such simple formula as for odd and even numbers to generate the prime numbers. The primes have been widely studied for more than 2000 years and have led to many questions, only some of which have been answered. The study of these questions belongs to number theory. Goldbach's conjecture is an example of a still unanswered question: "Is every even number greater than 2 the sum of two primes?" One answered question, as to whether every integer greater than one is a product of primes in only one way, except for a rearrangement of the primes, was confirmed; this proven claim is called the fundamental theorem of arithmetic. A proof appears in Euclid's Elements. Other classes of integers Many subsets of the natural numbers have been the subject of specific studies and have been named, often after the first mathematician who studied them. Examples of such sets of integers are Fibonacci numbers and perfect numbers. For more examples, see Integer sequence. Subclasses of the complex numbers Algebraic, irrational and transcendental numbers Algebraic numbers are those that are a solution to a polynomial equation with integer coefficients. Real numbers that are not rational numbers are called irrational numbers. Complex numbers which are not algebraic are called transcendental numbers. The algebraic numbers that are solutions of a monic polynomial equation with integer coefficients are called algebraic integers. Periods and exponential periods A period is a complex number that can be expressed as an integral of an algebraic function over an algebraic domain. The periods are a class of numbers which includes, alongside the algebraic numbers, many well-known mathematical constants such as the number π. The set of periods forms a countable ring and bridges the gap between algebraic and transcendental numbers. The periods can be extended by permitting the integrand to be the product of an algebraic function and the exponential of an algebraic function. This gives another countable ring: the exponential periods. The number e as well as Euler's constant are exponential periods. Constructible numbers Motivated by the classical problems of constructions with straightedge and compass, the constructible numbers are those complex numbers whose real and imaginary parts can be constructed using straightedge and compass, starting from a given segment of unit length, in a finite number of steps.
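A small Python sketch of the even/odd formulas and of the unique factorization guaranteed by the fundamental theorem of arithmetic discussed above (trial division is used purely for illustration; it is not an efficient factorization method):

def factorize(n):
    # Trial division; the returned multiset of primes is unique up to order
    # (fundamental theorem of arithmetic).
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print([2 * k + 1 for k in range(5)])   # odd numbers n = 2k + 1: [1, 3, 5, 7, 9]
print([2 * k for k in range(5)])       # even numbers m = 2k:    [0, 2, 4, 6, 8]
print(factorize(360))                  # [2, 2, 2, 3, 3, 5]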
Computable numbers A computable number, also known as a recursive number, is a real number such that there exists an algorithm which, given a positive number n as input, produces the first n digits of the computable number's decimal representation. Equivalent definitions can be given using μ-recursive functions, Turing machines or λ-calculus. The computable numbers are closed under all usual arithmetic operations, including the computation of the roots of a polynomial, and thus form a real closed field that contains the real algebraic numbers. The computable numbers may be viewed as the real numbers that may be exactly represented in a computer: a computable number is exactly represented by its first digits and a program for computing further digits. However, the computable numbers are rarely used in practice. One reason is that there is no algorithm for testing the equality of two computable numbers. More precisely, there cannot exist any algorithm which takes any computable number as an input, and decides in every case if this number is equal to zero or not. The set of computable numbers has the same cardinality as the natural numbers. Therefore, almost all real numbers are non-computable. However, it is very difficult to produce explicitly a real number that is not computable. Extensions of the concept p-adic numbers The p-adic numbers may have infinitely long expansions to the left of the decimal point, in the same way that real numbers may have infinitely long expansions to the right. The number system that results depends on what base is used for the digits: any base is possible, but a prime number base provides the best mathematical properties. The set of the p-adic numbers contains the rational numbers, but is not contained in the complex numbers. The elements of an algebraic function field over a finite field and algebraic numbers have many similar properties (see Function field analogy). Therefore, they are often regarded as numbers by number theorists. The p-adic numbers play an important role in this analogy. Hypercomplex numbers Some number systems that are not included in the complex numbers may be constructed from the real numbers in a way that generalizes the construction of the complex numbers. They are sometimes called hypercomplex numbers. They include the quaternions ℍ, introduced by Sir William Rowan Hamilton, in which multiplication is not commutative, the octonions 𝕆, in which multiplication is not associative in addition to not being commutative, and the sedenions 𝕊, in which multiplication is not alternative, in addition to being neither associative nor commutative. The hypercomplex numbers include one real unit together with 2^n − 1 imaginary units, for which n is a non-negative integer. For example, quaternions can generally be represented using the form a + bi + cj + dk, where the coefficients a, b, c, d are real numbers, and i, j, k are 3 different imaginary units. Each hypercomplex number system is a subset of the next hypercomplex number system of double dimensions obtained via the Cayley–Dickson construction. For example, the 4-dimensional quaternions ℍ are a subset of the 8-dimensional octonions 𝕆, which are in turn a subset of the 16-dimensional sedenions 𝕊, in turn a subset of the 32-dimensional trigintaduonions 𝕋, and ad infinitum with 2^n dimensions, with n being any non-negative integer. Including the complex and real numbers and their subsets, this can be expressed symbolically as: ℝ ⊂ ℂ ⊂ ℍ ⊂ 𝕆 ⊂ 𝕊 ⊂ 𝕋 ⊂ ⋯. Alternatively, starting from the real numbers ℝ, which have zero complex units, this can be expressed as an increasing chain of systems, the n-th of which contains 2^n dimensions.
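As a concrete illustration of the computable-number definition above, a short Python sketch that returns the first n decimal digits of the square root of 2 using exact integer arithmetic (one example of an algorithm witnessing that √2 is computable; nothing beyond the standard library is assumed):

from math import isqrt

def sqrt2_digits(n):
    # isqrt(2 * 10**(2n)) equals floor(sqrt(2) * 10**n), so the string below
    # holds the integer part followed by the first n digits after the decimal
    # point, computed exactly with no floating-point error.
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[0] + "." + s[1:]

print(sqrt2_digits(30))   # 1.414213562373095048801688724209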
Transfinite numbers For dealing with infinite sets, the natural numbers have been generalized to the ordinal numbers and to the cardinal numbers. The former gives the ordering of the set, while the latter gives its size. For finite sets, both ordinal and cardinal numbers are identified with the natural numbers. In the infinite case, many ordinal numbers correspond to the same cardinal number. Nonstandard numbers Hyperreal numbers are used in non-standard analysis. The hyperreals, or nonstandard reals (usually denoted as *R), denote an ordered field that is a proper extension of the ordered field of real numbers R and satisfies the transfer principle. This principle allows true first-order statements about R to be reinterpreted as true first-order statements about *R. Superreal and surreal numbers extend the real numbers by adding infinitesimally small numbers and infinitely large numbers, but still form fields.
Mathematics
Mathematics
null
21710
https://en.wikipedia.org/wiki/Nerve%20agent
Nerve agent
Nerve agents, sometimes also called nerve gases, are a class of organic chemicals that disrupt the mechanisms by which nerves transfer messages to organs. The disruption is caused by the blocking of acetylcholinesterase (AChE), an enzyme that catalyzes the breakdown of acetylcholine, a neurotransmitter. Nerve agents are irreversible acetylcholinesterase inhibitors used as poison. Poisoning by a nerve agent leads to constriction of pupils, profuse salivation, convulsions, and involuntary urination and defecation, with the first symptoms appearing in seconds after exposure. Death by asphyxiation or cardiac arrest may follow in minutes due to the loss of the body's control over respiratory and other muscles. Some nerve agents are readily vaporized or aerosolized, and the primary portal of entry into the body is the respiratory system. Nervous agents can also be absorbed through the skin, requiring that those likely to be subjected to such agents wear a full body suit in addition to a respirator. Nerve agents are generally colorless and tasteless liquids. Nerve agents evaporate at varying rates depending on the substance. None are gases in normal environments. The popular term "nerve gas" is inaccurate. Agents Sarin and VX are odorless; Tabun has a slightly fruity odor and Soman has a slight camphor odor. Biological effects Nerve agents attack the nervous system. All such agents function the same way resulting in cholinergic crisis: they inhibit the enzyme acetylcholinesterase, which is responsible for the breakdown of acetylcholine (ACh) in the synapses between nerves that control whether muscle tissues are to relax or contract. If the agent cannot be broken down, muscles are prevented from receiving 'relax' signals and they are effectively paralyzed. It is the compounding of this paralysis throughout the body that quickly leads to more severe complications, including the heart and the muscles used for breathing. Because of this, the first symptoms usually appear within 30 seconds of exposure and death can occur via asphyxiation or cardiac arrest in a few minutes, depending upon the dose received and the agent used. Initial symptoms following exposure to nerve agents (like Sarin) are a runny nose, tightness in the chest, and constriction of the pupils. Soon after, the victim will have difficulty breathing and will experience nausea and salivation. As the victim continues to lose control of bodily functions, involuntary salivation, lacrimation, urination, defecation, gastrointestinal pain and vomiting will be experienced. Blisters and burning of the eyes and/or lungs may also occur. This phase is followed by initially myoclonic jerks (muscle jerks) followed by status epilepticus–type epileptic seizure. Death then comes via complete respiratory depression, most likely via the excessive peripheral activity at the neuromuscular junction of the diaphragm. The effects of nerve agents are long lasting and increase with continued exposure. Survivors of nerve agent poisoning almost invariably develop chronic neurological damage and related psychiatric effects. Possible effects that can last at least up to two–three years after exposure include blurred vision, tiredness, declined memory, hoarse voice, palpitations, sleeplessness, shoulder stiffness and eye strain. In people exposed to nerve agents, serum and erythrocyte acetylcholinesterase in the long-term are noticeably lower than normal and tend to be lower the worse the persisting symptoms are. 
Mechanism of action When a normally functioning motor nerve is stimulated, it releases the neurotransmitter acetylcholine, which transmits the impulse to a muscle or organ. Once the impulse is sent, the enzyme acetylcholinesterase immediately breaks down the acetylcholine in order to allow the muscle or organ to relax. Nerve agents disrupt the nervous system by inhibiting the function of the enzyme acetylcholinesterase by forming a covalent bond with its active site, where acetylcholine would normally be broken down (undergo hydrolysis). Acetylcholine thus builds up and continues to act so that any nerve impulses are continually transmitted and muscle contractions do not stop. This same action also occurs at the gland and organ levels, resulting in uncontrolled drooling, tearing of the eyes (lacrimation) and excess production of mucus from the nose (rhinorrhea). The reaction product of the most important nerve agents, including Soman, Sarin, Tabun and VX, with acetylcholinesterase were solved by the U.S. Army using X-ray crystallography in the 1990s. The reaction products have been confirmed subsequently using different sources of acetylcholinesterase and the closely related target enzyme, butyrylcholinesterase. The X-ray structures clarify important aspects of the reaction mechanism (e.g., stereochemical inversion) at atomic resolution and provide a key tool for antidote development. Treatment Standard treatment for nerve agent poisoning is a combination of an anticholinergic to manage the symptoms, and an oxime as an antidote. Anticholinergics treat the symptoms by reducing the effects of acetylcholine, while oximes displaces phosphate molecules from the active site of the cholinesterase enzymes, allowing the breakdown of acetylcholine. Military personnel are issued the combination in an autoinjector (e.g. ATNAA), for ease of use in stressful conditions. Atropine is the standard anticholinergic drug used to manage the symptoms of nerve agent poisoning. It acts as an antagonist to muscarinic acetylcholine receptors, blocking the effects of excess acetylcholine. Some synthetic anticholinergics, such as biperiden, may counteract the central symptoms of nerve agent poisoning more effectively than atropine, since they pass the blood–brain barrier better. While these drugs will save the life of a person affected by nerve agents, that person may be incapacitated briefly or for an extended period, depending on the extent of exposure. The endpoint of atropine administration is the clearing of bronchial secretions. Pralidoxime chloride (also known as 2-PAMCl) is the standard oxime used to treat nerve agent poisoning. Rather than counteracting the initial effects of the nerve agent on the nervous system as does atropine, pralidoxime chloride reactivates the poisoned enzyme (acetylcholinesterase) by scavenging the phosphoryl group attached on the functional hydroxyl group of the enzyme, counteracting the nerve agent itself. Revival of acetylcholinesterase with pralidoxime chloride works more effectively on nicotinic receptors while blocking acetylcholine receptors with atropine is more effective on muscarinic receptors. Anticonvulsants, such as diazepam, may be administered to manage seizures, improving long term prognosis and reducing risk of brain damage. This is not usually self-administered as its use is for actively seizing patients. Countermeasures Pyridostigmine bromide was used by the US military in the first Gulf War as a pretreatment for Soman as it increased the median lethal dose. 
It is only effective if taken prior to exposure and in conjunction with Atropine and Pralidoxime, issued in the Mark I NAAK autoinjector, and is ineffective against other nerve agents. While it reduces fatality rates, there is an increased risk of brain damage; this can be mitigated by administration of an anticonvulsant. Evidence suggests that the use of pyridostigmine may be responsible for some of the symptoms of Gulf War syndrome. Butyrylcholinesterase is under development by the U.S. Department of Defense as a prophylactic countermeasure against organophosphate nerve agents. It binds nerve agent in the bloodstream before the poison can exert effects in the nervous system. Both purified acetylcholinesterase and butyrylcholinesterase have demonstrated success in animal studies as "biological scavengers" (and universal targets) to provide stoichiometric protection against the entire spectrum of organophosphate nerve agents. Butyrylcholinesterase currently is the preferred enzyme for development as a pharmaceutical drug primarily because it is a naturally circulating human plasma protein (superior pharmacokinetics) and its larger active site compared with acetylcholinesterase may permit greater flexibility for future design and improvement of butyrylcholinesterase to act as a nerve agent scavenger. Classes There are two main classes of nerve agents. The members of the two classes share similar properties and are given both a common name (such as Sarin) and a two-character NATO identifier (such as GB). G-series The G-series is thus named because German scientists first synthesized them. G series agents are known as non-persistent, meaning that they evaporate shortly after release, and do not remain active in the dispersal area for very long. All of the compounds in this class were discovered and synthesized during or prior to World War II, led by Gerhard Schrader (later under the employment of IG Farben). This series is the first and oldest family of nerve agents. The first nerve agent ever synthesized was GA (Tabun) in 1936. GB (Sarin) was discovered next in 1939, followed by GD (Soman) in 1944, and finally the more obscure GF (Cyclosarin) in 1949. GB was the only G agent that was fielded by the US as a munition, in rockets, aerial bombs, and artillery shells. V-series The V-series is the second family of nerve agents and contains five well known members: VE, VG, VM, VR, and VX, along with several more obscure analogues. The most studied agent in this family, VX (it is thought that the 'X' in its name comes from its overlapping isopropyl radicals), was invented in the 1950s at Porton Down in Wiltshire, England. Ranajit Ghosh, a chemist at the Plant Protection Laboratories of Imperial Chemical Industries (ICI) was investigating a class of organophosphate compounds (organophosphate esters of substituted aminoethanethiols). Like Schrader, Ghosh found that they were quite effective pesticides. In 1954, ICI put one of them on the market under the trade name Amiton. It was subsequently withdrawn, as it was too toxic for safe use. The toxicity did not escape military notice and some of the more toxic materials had been sent to Porton Down for evaluation. After the evaluation was complete, several members of this class of compounds became a new group of nerve agents, the V agents (depending on the source, the V stands for Victory, Venomous, or Viscous). 
The best known of these is probably VX, with VR ("Russian V-gas") coming a close second (Amiton is largely forgotten as VG, with G probably coming from "G"hosh). All of the V-agents are persistent agents, meaning that these agents do not degrade or wash away easily and can therefore remain on clothes and other surfaces for long periods. In use, this allows the V-agents to be used to blanket terrain to guide or curtail the movement of enemy ground forces. The consistency of these agents is similar to oil; as a result, the contact hazard for V-agents is primarily – but not exclusively – dermal. VX was the only V-series agent that was fielded by the US as a munition, in rockets, artillery shells, airplane spray tanks, and landmines. Analyzing the structure of thirteen V agents, the standard composition, which makes a compound enter this group, is the absence of halides. It is clear that many agricultural pesticides can be considered as V agents if they are notoriously toxic. The agent is not required to be a phosphonate and presents a dialkylaminoethyl group. The toxicity requirement is waived as the VT agent and its salts (VT-1 and VT-2) are "non-toxic". Replacing the sulfur atom with selenium increases the toxicity of the agent by orders of magnitude. Novichok agents The Novichok (Russian: , "newcomer") agents, a series of organophosphate compounds, were developed in the Soviet Union and in Russia from the mid-1960s to the 1990s. The Novichok program aimed to develop and manufacture highly deadly chemical weapons that were unknown to the West. The new agents were designed to be undetectable by standard NATO chemical-detection equipment and overcome contemporary chemical-protective equipment. In addition to the newly developed "third generation" weapons, binary versions of several Soviet agents were developed and were designated as "Novichok" agents. Carbamates Contrary to some claims, not all nerve agents are organophosphates. The starting compound studied by the United States was the carbamate EA-1464, of notorious toxicity. Compounds similar in structure and effect to EA-1464 formed a large group, including compounds such as EA-3990 and EA-4056. The Family Practice Notebook claims carbamate-based nerve agents can be three times as toxic as VX. Both the United States and the Soviet Union developed carbamate-based nerve agents during the Cold War. Carbamate-based nerve agents are sometimes grouped in academic literature with Fourth Generation Novichok agents, as they were added to the CWC schedule on banned agents at the same time, despite their significant differences in chemical makeup and mechanisms of action. Carbamate-based nerve agents have been identified as Schedule 1 Nerve Agents, the highest classification possible under the CWC, reserved for agents with no identified alternate use, and those that can cause the most harm. Insecticides Some insecticides, including carbamates and organophosphates such as dichlorvos, malathion and parathion, are nerve agents. The metabolism of insects is sufficiently different from mammals that these compounds have little effect on humans and other mammals at proper doses, but there is considerable concern about the effects of long-term exposure to these chemicals by farm workers and animals alike. At high enough doses, acute toxicity and death can occur through the same mechanism as other nerve agents. 
Some insecticides such as demeton, dimefox and paraoxon are sufficiently toxic to humans that they have been withdrawn from agricultural use, and were at one stage investigated for potential military applications. Paraoxon was allegedly used as an assassination weapon by the apartheid South African government as part of Project Coast. Organophosphate pesticide poisoning is a major cause of disability in many developing countries and is often the preferred method of suicide. Methods of dissemination Many methods exist for spreading nerve agents such as: uncontrolled aerosol munitions smoke generation explosive dissemination atomizers, humidifiers and foggers The method chosen will depend on the physical properties of the nerve agent(s) used, the nature of the target, and the achievable level of sophistication. History Discovery This first class of nerve agents, the G-series, was accidentally discovered in Germany on December 23, 1936, by a research team headed by Gerhard Schrader working for IG Farben. Since 1934, Schrader had been working in a laboratory in Leverkusen to develop new types of insecticides for IG Farben. While working toward his goal of improved insecticide, Schrader experimented with numerous compounds, eventually leading to the preparation of Tabun. In experiments, Tabun was extremely potent against insects: as little as 5 ppm of Tabun killed all the leaf lice he used in his initial experiment. In January 1937, Schrader observed the effects of nerve agents on human beings first-hand when a drop of Tabun spilled onto a lab bench. Within minutes he and his laboratory assistant began to experience miosis (constriction of the pupils of the eyes), dizziness and severe shortness of breath. It took them three weeks to recover fully. In 1935 the Nazi government had passed a decree that required all inventions of possible military significance to be reported to the Ministry of War, so in May 1937 Schrader sent a sample of Tabun to the chemical warfare (CW) section of the Army Weapons Office in Berlin-Spandau. Schrader was summoned to the Wehrmacht chemical lab in Berlin to give a demonstration, after which Schrader's patent application and all related research was classified as secret. Colonel Rüdiger, head of the CW section, ordered the construction of new laboratories for the further investigation of Tabun and other organophosphate compounds and Schrader soon moved to a new laboratory at Wuppertal-Elberfeld in the Ruhr valley to continue his research in secret throughout World War II. The compound was initially codenamed Le-100 and later Trilon-83. Sarin was discovered by Schrader and his team in 1938 and named in honor of its discoverers: Gerhard Schrader, Otto Ambros, , and Hans-Jürgen von der Linde. It was codenamed T-144 or Trilon-46. It was found to be more than ten times as potent as Tabun. Soman was discovered by Richard Kuhn in 1944 as he worked with the existing compounds; the name is derived from either the Greek 'to sleep' or the Latin 'to bludgeon'. It was codenamed T-300. Cyclosarin was also discovered during WWII but the details were lost and it was rediscovered in 1949. The G-series naming system was created by the United States when it uncovered the German activities, labeling Tabun as GA (German Agent A), Sarin as GB and Soman as GD. Ethyl Sarin was tagged GE and CycloSarin as GF. During World War II In 1939, a pilot plant for Tabun production was set up at Munster-Lager, on Lüneburg Heath near the German Army proving grounds at . 
In January 1940, construction began on a secret plant, code named "Hochwerk" (High factory), for the production of Tabun at Dyhernfurth an der Oder (now Brzeg Dolny in Poland), on the Oder River from Breslau (now Wrocław) in Silesia. The plant was large, covering an area of and was completely self-contained, synthesizing all intermediates as well as the final product, Tabun. The factory even had an underground plant for filling munitions, which were then stored at Krappitz (now Krapkowice) in Upper Silesia. The plant was operated by , a subsidiary of IG Farben, as were all other chemical weapon agent production plants in Germany at the time. Because of the plant's deep secrecy and the difficult nature of the production process, it took from January 1940 until June 1942 for the plant to become fully operational. Many of Tabun's chemical precursors were so corrosive that reaction chambers not lined with quartz or silver soon became useless. Tabun itself was so hazardous that the final processes had to be performed while enclosed in double glass-lined chambers with a stream of pressurized air circulating between the walls. Three thousand German nationals were employed at Hochwerk, all equipped with respirators and clothing constructed of a poly-layered rubber/cloth/rubber sandwich that was destroyed after the tenth wearing. Despite all precautions, there were over 300 accidents before production even began and at least ten workers died during the two and a half years of operation. Some incidents cited in A Higher Form of Killing: The Secret History of Chemical and Biological Warfare are as follows: Four pipe fitters had liquid Tabun drain onto them and died before their rubber suits could be removed. A worker had two liters of Tabun pour down the neck of his rubber suit. He died within two minutes. Seven workers were hit in the face with a stream of Tabun of such force that the liquid was forced behind their respirators. Only two survived despite resuscitation measures. and moved, probably to Dzerzhinsk, USSR. In 1940 the German Army Weapons Office ordered the mass production of Sarin for wartime use. A number of pilot plants were built and a high-production facility was under construction (but was not finished) by the end of World War II. Estimates for total Sarin production by Nazi Germany range from 500 kg to 10 tons. During that time, German intelligence believed that the Allies also knew of these compounds, assuming that because these compounds were not discussed in the Allies' scientific journals information about them was being suppressed. Though Sarin, Tabun and Soman were incorporated into artillery shells, the German government ultimately decided not to use nerve agents against Allied targets. The Allies did not learn of these agents until shells filled with them were captured towards the end of the war. German forces used chemical warfare against partisans during the Battle of the Kerch Peninsula in 1942, but did not use any nerve agent. This is detailed in Joseph Borkin's book The Crime and Punishment of IG Farben: Post–World War II Since World War II, Iraq's use of mustard gas against Iranian troops and Kurds (Iran–Iraq War of 1980–1988) has been the only large-scale use of any chemical weapons. On the scale of the single Kurdish village of Halabja within its own territory, Iraqi forces did expose the populace to some kind of chemical weapons, possibly mustard gas and most likely nerve agents. 
Operatives of the Aum Shinrikyo religious group made and used Sarin several times on other Japanese, most notably the Tokyo subway sarin attack. In the Gulf War, no nerve agents (nor other chemical weapons) were used, but a number of U.S. and UK personnel were exposed to them when the Khamisiyah chemical depot was destroyed. This and the widespread use of anticholinergic drugs as a protective treatment against any possible nerve gas attack have been proposed as a possible cause of Gulf War syndrome. Sarin gas was deployed in a 2013 attack on Ghouta during the Syrian Civil War, killing several hundred people. Most governments contend that forces loyal to President Bashar al-Assad deployed the gas; however, the Syrian Government has denied responsibility. On 13 February 2017, the nerve agent VX was used in the assassination of Kim Jong-nam, half-brother of the North Korean leader Kim Jong-un, at Kuala Lumpur International Airport in Malaysia. On 4 March 2018, a former Russian agent (who was convicted of high treason but allowed to live in the United Kingdom via a spy swap agreement), Sergei Skripal, and his daughter, who was visiting from Moscow, were both poisoned by a Novichok nerve agent in the English city of Salisbury. They survived, and were subsequently released from hospital. In addition, a Wiltshire Police officer, Nick Bailey, was exposed to the substance. He was one of the first to respond to the incident. Twenty-one members of the public received medical treatment following exposure to the nerve agent. Despite this, only Bailey and the Skripals remained in critical condition. On 11 March 2018, Public Health England issued advice for the other people believed to have been in the Mill pub (the location where the attack is believed to have been carried out) or the nearby Zizzi Restaurant. On 12 March 2018, British Prime Minister Theresa May stated that the substance used was a Novichok nerve agent. On 30 June 2018, two British nationals, Charlie Rowley and Dawn Sturgess, were poisoned by a Novichok nerve agent of the same kind that was used in the Skripal poisoning, which Rowley had found in a discarded perfume bottle and gifted to Sturgess. Whilst Rowley survived, Sturgess died on 8 July. Metropolitan Police believe that the poisoning was not a targeted attack, but a result of the way the nerve agent was disposed of after the poisoning in Salisbury. Ocean disposal In 1972, the United States Congress banned the practice of disposing chemical weapons into the ocean. Thirty-two thousand tons of nerve and mustard agents had already been dumped into the ocean waters off the United States by the U.S. Army, primarily as part of Operation CHASE. According to a 1998 report by William Brankowitz, a deputy project manager in the U.S. Army Chemical Materials Agency, the Army created at least 26 chemical weapons dump sites in the ocean off at least 11 states on both the west and east coasts. Due to poor records, they currently only know the rough whereabouts of half of them. There is currently a lack of scientific data regarding the ecological and health effects of this dumping. In the event of leakage, many nerve agents are soluble in water and would dissolve in a few days, while other substances like sulfur mustard could last longer. There have also been a few incidents of chemical weapons washing ashore or being accidentally retrieved, for example during dredging or trawl fishing operations. 
Detection Detection of gaseous nerve agents The methods of detecting gaseous nerve agents include but are not limited to the following. Laser photoacoustic spectroscopy Laser photoacoustic spectroscopy (LPAS) is a method that has been used to detect nerve agents in the air. In this method, laser light is absorbed by gaseous matter. This causes a heating/cooling cycle and changes in pressure. Sensitive microphones detect the sound waves that result from the pressure changes. Scientists at the U.S. Army Research Laboratory engineered an LPAS system that can detect multiple trace amounts of toxic gases in one air sample. This technology contained three lasers modulated to different frequencies, each producing a different sound wave tone. The different wavelengths of light were directed into a sensor referred to as the photoacoustic cell. Within the cell were the vapors of different nerve agents. The traces of each nerve agent had a signature effect on the "loudness" of the lasers' sound wave tones. Some overlap of nerve agents' effects did occur in the acoustic results. However, it was predicted that specificity would increase as additional lasers with unique wavelengths were added. Yet, too many lasers set to different wavelengths could result in overlap of absorption spectra. LPAS technology can identify gases in parts per billion (ppb) concentrations. The following nerve agent simulants have been identified with this multiwavelength LPAS: dimethyl methyl phosphonate (DMMP), diethyl methyl phosphonate (DEMP), diisopropyl methyl phosphonate (DIMP), dimethylpolysiloxane (DIME), triethyl phosphate (TEP), tributyl phosphate (TBP), and two volatile organic compounds (VOCs): acetone (ACE) and isopropanol (ISO), the latter used in the production of Sarin. Other gases and air contaminants identified with LPAS include carbon dioxide (CO2), benzene, formaldehyde, acetaldehyde, ammonia, nitrogen oxides (NOx), sulfur dioxide (SO2), ethylene glycol, TATP, and TNT. Non-dispersive infrared Non-dispersive infrared techniques have been reported to be used for gaseous nerve agent detection. IR absorption Traditional IR absorption has been reported to detect gaseous nerve agents. Fourier transform infrared spectroscopy Fourier transform infrared (FTIR) spectroscopy has been reported to detect gaseous nerve agents.
Technology
Weapon of mass destruction
null
21723
https://en.wikipedia.org/wiki/Nonlinear%20optics
Nonlinear optics
Nonlinear optics (NLO) is the branch of optics that describes the behaviour of light in nonlinear media, that is, media in which the polarization density P responds non-linearly to the electric field E of the light. The non-linearity is typically observed only at very high light intensities (when the electric field of the light is >10^8 V/m and thus comparable to the atomic electric field of ~10^11 V/m) such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds. History The first nonlinear optical effect to be predicted was two-photon absorption, by Maria Goeppert Mayer for her PhD in 1931, but it remained an unexplored theoretical curiosity until 1961 and the almost simultaneous observation of two-photon absorption at Bell Labs and the discovery of second-harmonic generation by Peter Franken et al. at University of Michigan, both shortly after the construction of the first laser by Theodore Maiman. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph "Nonlinear Optics". Nonlinear optical processes Nonlinear optics explains the nonlinear response of properties such as frequency, polarization, phase or path of incident light. These nonlinear interactions give rise to a host of optical phenomena: Frequency-mixing processes Second-harmonic generation (SHG), or frequency doubling, generation of light with a doubled frequency (half the wavelength); two photons are destroyed, creating a single photon at two times the frequency. Third-harmonic generation (THG), generation of light with a tripled frequency (one-third the wavelength); three photons are destroyed, creating a single photon at three times the frequency. High-harmonic generation (HHG), generation of light with frequencies much greater than the original (typically 100 to 1000 times greater). Sum-frequency generation (SFG), generation of light with a frequency that is the sum of two other frequencies (SHG is a special case of this). Difference-frequency generation (DFG), generation of light with a frequency that is the difference between two other frequencies. Optical parametric amplification (OPA), amplification of a signal input in the presence of a higher-frequency pump wave, at the same time generating an idler wave (can be considered as DFG). Optical parametric oscillation (OPO), generation of a signal and idler wave using a parametric amplifier in a resonator (with no signal input). Optical parametric generation (OPG), like parametric oscillation but without a resonator, using a very high gain instead. Half-harmonic generation, the special case of OPO or OPG when the signal and idler degenerate in one single frequency. Spontaneous parametric down-conversion (SPDC), the amplification of the vacuum fluctuations in the low-gain regime. Optical rectification (OR), generation of quasi-static electric fields. Nonlinear light-matter interaction with free electrons and plasmas. Other nonlinear processes Optical Kerr effect, intensity-dependent refractive index (a χ(3) effect). Self-focusing, an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the spatial variation in the intensity creating a spatial variation in the refractive index. Kerr-lens modelocking (KLM), the use of self-focusing as a mechanism to mode-lock lasers.
Self-phase modulation (SPM), an effect due to the optical Kerr effect (and possibly higher-order nonlinearities) caused by the temporal variation in the intensity creating a temporal variation in the refractive index. Optical solitons, an equilibrium solution for either an optical pulse (temporal soliton) or spatial mode (spatial soliton) that does not change during propagation due to a balance between dispersion and the Kerr effect (e.g. self-phase modulation for temporal and self-focusing for spatial solitons). Self-diffraction, splitting of beams in a multi-wave mixing process with potential energy transfer. Cross-phase modulation (XPM), where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect. Four-wave mixing (FWM), which can also arise from other nonlinearities. Cross-polarized wave generation (XPW), a χ(3) effect in which a wave with polarization vector perpendicular to the input one is generated. Modulational instability. Raman amplification. Optical phase conjugation. Stimulated Brillouin scattering, interaction of photons with acoustic phonons. Multi-photon absorption, simultaneous absorption of two or more photons, transferring the energy to a single electron. Multiple photoionisation, near-simultaneous removal of many bound electrons by one photon. Chaos in optical systems. Related processes In these processes, the medium has a linear response to the light, but the properties of the medium are affected by other causes: Pockels effect, the refractive index is affected by a static electric field; used in electro-optic modulators. Acousto-optics, the refractive index is affected by acoustic waves (ultrasound); used in acousto-optic modulators. Raman scattering, interaction of photons with optical phonons. Parametric processes Nonlinear effects fall into two qualitatively different categories, parametric and non-parametric effects. A parametric non-linearity is an interaction in which the quantum state of the nonlinear material is not changed by the interaction with the optical field. As a consequence of this, the process is "instantaneous". Energy and momentum are conserved in the optical field, making phase matching important and polarization-dependent. Theory Parametric and "instantaneous" (i.e. material must be lossless and dispersionless through the Kramers–Kronig relations) nonlinear optical phenomena, in which the optical fields are not too large, can be described by a Taylor series expansion of the dielectric polarization density (electric dipole moment per unit volume) P(t) at time t in terms of the electric field E(t): P(t) = ε0(χ(1)E(t) + χ(2)E(t)² + χ(3)E(t)³ + ⋯), where the coefficients χ(n) are the n-th-order susceptibilities of the medium, and the presence of such a term is generally referred to as an n-th-order nonlinearity. Note that the polarization density P(t) and electrical field E(t) are considered as scalar for simplicity. In general, χ(n) is an (n + 1)-th-rank tensor representing both the polarization-dependent nature of the parametric interaction and the symmetries (or lack thereof) of the nonlinear material. Wave equation in a nonlinear material Central to the study of electromagnetic waves is the wave equation. Starting with Maxwell's equations in an isotropic space, containing no free charge, it can be shown that ∇ × (∇ × E) + (n²/c²) ∂²E/∂t² = −(1/(ε0c²)) ∂²PNL/∂t², where PNL is the nonlinear part of the polarization density, and n is the refractive index, which comes from the linear term in P.
Note that one can normally use the vector identity ∇ × (∇ × E) = ∇(∇ · E) − ∇²E and Gauss's law (assuming no free charges, ∇ · D = 0), to obtain the more familiar wave equation ∇²E − (n²/c²) ∂²E/∂t² = (1/(ε0c²)) ∂²PNL/∂t². For a nonlinear medium, Gauss's law does not imply that the identity ∇ · E = 0 is true in general, even for an isotropic medium. However, even when this term is not identically 0, it is often negligibly small and thus in practice is usually ignored, giving us the standard nonlinear wave equation ∇²E − (n²/c²) ∂²E/∂t² = (1/(ε0c²)) ∂²PNL/∂t². Nonlinearities as a wave-mixing process The nonlinear wave equation is an inhomogeneous differential equation. The general solution comes from the study of ordinary differential equations and can be obtained by the use of a Green's function. Physically one gets the normal electromagnetic wave solutions to the homogeneous part of the wave equation, and the inhomogeneous term acts as a driver/source of the electromagnetic waves. One of the consequences of this is a nonlinear interaction that results in energy being mixed or coupled between different frequencies, which is often called a "wave mixing". In general, an n-th order nonlinearity will lead to (n + 1)-wave mixing. As an example, if we consider only a second-order nonlinearity (three-wave mixing), then the polarization P takes the form P(t) = ε0χ(2)E(t)². If we assume that E(t) is made up of two components at frequencies ω1 and ω2, we can write E(t) as E(t) = E1 cos(ω1t) + E2 cos(ω2t), and using Euler's formula to convert to exponentials, E(t) = ½E1 e^(−iω1t) + ½E2 e^(−iω2t) + c.c., where "c.c." stands for complex conjugate. Plugging this into the expression for P gives P(t) = (ε0χ(2)/4)[E1² e^(−2iω1t) + E2² e^(−2iω2t) + 2E1E2 e^(−i(ω1 + ω2)t) + 2E1E2* e^(−i(ω1 − ω2)t) + c.c.] + (ε0χ(2)/2)[|E1|² + |E2|²], which has frequency components at 2ω1, 2ω2, ω1 + ω2, ω1 − ω2, and 0. These three-wave mixing processes correspond to the nonlinear effects known as second-harmonic generation, sum-frequency generation, difference-frequency generation and optical rectification respectively. Note: Parametric generation and amplification is a variation of difference-frequency generation, where the lower frequency of one of the two generating fields is much weaker (parametric amplification) or completely absent (parametric generation). In the latter case, the fundamental quantum-mechanical uncertainty in the electric field initiates the process. Phase matching The above ignores the position dependence of the electrical fields. In a typical situation, the electrical fields are traveling waves described by Ej(x, t) = Ej,0 e^(i(kj·x − ωjt)) + c.c. at position x, with the wave vector kj, whose magnitude is |kj| = n(ωj)ωj/c, where c is the velocity of light in vacuum and n(ωj) is the index of refraction of the medium at angular frequency ωj. Thus, the second-order polarization at angular frequency ω3 = ω1 + ω2 is P(2)(x, t) ∝ E1,0E2,0 e^(i((k1 + k2)·x − ω3t)) + c.c. At each position x within the nonlinear medium, the oscillating second-order polarization radiates at angular frequency ω3 and a corresponding wave vector k1 + k2. Constructive interference, and therefore a high-intensity ω3 field, will occur only if k3 = k1 + k2, where k3 is the wave vector of the freely propagating ω3 wave. The above equation is known as the phase-matching condition. Typically, three-wave mixing is done in a birefringent crystalline material, where the refractive index depends on the polarization and direction of the light that passes through. The polarizations of the fields and the orientation of the crystal are chosen such that the phase-matching condition is fulfilled. This phase-matching technique is called angle tuning. Typically a crystal has three axes, one or two of which have a different refractive index than the other one(s). Uniaxial crystals, for example, have a single preferred axis, called the extraordinary (e) axis, while the other two are ordinary axes (o) (see crystal optics). There are several schemes of choosing the polarizations for this crystal type.
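A numerical illustration of the frequency mixing just described (a minimal Python/NumPy sketch; the two frequencies and the unit-amplitude field are arbitrary illustrative values, not parameters of any real crystal):

import numpy as np

# Two-frequency field E(t); squaring it (a stand-in for the second-order
# polarization eps0*chi2*E^2) produces spectral peaks at 2*f1, 2*f2, f1+f2,
# f2-f1 and 0, i.e. the three-wave mixing products listed above.
f1, f2 = 30.0, 42.0
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
E = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
P = E ** 2

spectrum = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(sorted({round(float(f)) for f in peaks}))   # [0, 12, 60, 72, 84] = 0, f2-f1, 2*f1, f1+f2, 2*f2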
If the signal and idler have the same polarization, it is called "type-I phase matching", and if their polarizations are perpendicular, it is called "type-II phase matching". However, other conventions exist that specify further which frequency has what polarization relative to the crystal axis. These types are conventionally numbered from I to VIII, under the convention that the signal wavelength is shorter than the idler wavelength. Most common nonlinear crystals are negative uniaxial, which means that the e axis has a smaller refractive index than the o axes. In those crystals, type-I and -II phase matching are usually the most suitable schemes. In positive uniaxial crystals, types VII and VIII are more suitable. Types II and III are essentially equivalent, except that the names of signal and idler are swapped when the signal has a longer wavelength than the idler. For this reason, they are sometimes called IIA and IIB. The type numbers V–VIII are less common than I and II and variants. One undesirable effect of angle tuning is that the optical frequencies involved do not propagate collinearly with each other. This is due to the fact that the extraordinary wave propagating through a birefringent crystal possesses a Poynting vector that is not parallel to the propagation vector. This would lead to beam walk-off, which limits the nonlinear optical conversion efficiency. Two other methods of phase matching avoid beam walk-off by forcing all frequencies to propagate at 90° with respect to the optical axis of the crystal. These methods are called temperature tuning and quasi-phase-matching. Temperature tuning is used when the pump (laser) frequency polarization is orthogonal to the signal and idler frequency polarization. The birefringence in some crystals, in particular lithium niobate, is highly temperature-dependent. The crystal temperature is controlled to achieve phase-matching conditions. The other method is quasi-phase-matching. In this method the frequencies involved are not constantly locked in phase with each other; instead the crystal axis is flipped at a regular interval Λ, typically 15 micrometres in length. Hence, these crystals are called periodically poled. This causes the polarization response of the crystal to be shifted back in phase with the pump beam by reversing the nonlinear susceptibility. This allows net positive energy flow from the pump into the signal and idler frequencies. In this case, the crystal itself provides the additional wavevector k = 2π/Λ (and hence momentum) to satisfy the phase-matching condition. Quasi-phase-matching can be expanded to chirped gratings to get more bandwidth and to shape an SHG pulse like it is done in a dazzler. SHG of a pump and self-phase modulation (emulated by second-order processes) of the signal and an optical parametric amplifier can be integrated monolithically. Higher-order frequency mixing The above holds for χ(2) processes. It can be extended for χ(3) processes, where χ(3) is nonzero, something that is generally true in any medium without any symmetry restrictions; in particular resonantly enhanced sum or difference frequency mixing in gases is frequently used for extreme or "vacuum" ultra-violet light generation. In common scenarios, such as mixing in dilute gases, the non-linearity is weak and so the light beams are focused which, unlike the plane wave approximation used above, introduces a π phase shift on each light beam, complicating the phase-matching requirements.
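A back-of-the-envelope Python sketch of the quasi-phase-matching idea above: the poling period is Λ = 2π/Δk with Δk = k(2ω) − 2k(ω). The refractive indices below are illustrative placeholders, not data for any specific crystal:

import math

lam_pump = 1.064e-6              # pump wavelength in metres
n_pump, n_shg = 2.156, 2.233     # assumed indices at the pump and at the second harmonic

k_pump = 2 * math.pi * n_pump / lam_pump        # |k| = 2*pi*n/lambda
k_shg = 2 * math.pi * n_shg / (lam_pump / 2)
delta_k = k_shg - 2 * k_pump                    # phase mismatch without poling
print(2 * math.pi / delta_k)                    # required poling period, here about 6.9e-6 m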
Conveniently, difference frequency mixing with χ(3) cancels this focal phase shift and often has a nearly self-canceling overall phase-matching condition, which simplifies broad wavelength tuning relative to sum frequency generation. In χ(3) processes all four frequencies mix simultaneously, as opposed to sequential mixing via two χ(2) processes. The Kerr effect can be described as a χ(3) process as well. At high peak powers the Kerr effect can cause filamentation of light in air, in which the light travels without dispersion or divergence in a self-generated waveguide. At even higher intensities the Taylor series, which implied the dominance of the lower orders, no longer converges and instead a time-based model is used. When a noble gas atom is hit by an intense laser pulse, which has an electric field strength comparable to the Coulomb field of the atom, the outermost electron may be ionized from the atom. Once freed, the electron can be accelerated by the electric field of the light, first moving away from the ion, then back toward it as the field changes direction. The electron may then recombine with the ion, releasing its energy in the form of a photon. The light is emitted at every peak of the laser light field which is intense enough, producing a series of attosecond light flashes. The photon energies generated by this process can extend past the 800th harmonic order up to a few keV. This is called high-order harmonic generation. The laser must be linearly polarized, so that the electron returns to the vicinity of the parent ion. High-order harmonic generation has been observed in noble gas jets, cells, and gas-filled capillary waveguides. Example uses Frequency doubling One of the most commonly used frequency-mixing processes is frequency doubling, or second-harmonic generation. With this technique, the 1064 nm output from Nd:YAG lasers or the 800 nm output from Ti:sapphire lasers can be converted to visible light, with wavelengths of 532 nm (green) or 400 nm (violet) respectively. Practically, frequency doubling is carried out by placing a nonlinear medium in a laser beam. While there are many types of nonlinear media, the most common media are crystals. Commonly used crystals are BBO (β-barium borate), KDP (potassium dihydrogen phosphate), KTP (potassium titanyl phosphate), and lithium niobate. These crystals have the necessary properties of being strongly birefringent (necessary to obtain phase matching, see above), having a specific crystal symmetry, being transparent for both the impinging laser light and the frequency-doubled wavelength, and having high damage thresholds, which makes them resistant against the high-intensity laser light. Optical phase conjugation It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a conjugate beam, and thus the technique is known as optical phase conjugation (also called time reversal or wavefront reversal; it is significantly different from retroreflection). A device producing the phase-conjugation effect is known as a phase-conjugate mirror (PCM). Principles One can interpret optical phase conjugation as being analogous to a real-time holographic process. In this case, the interacting beams simultaneously interact in a nonlinear optical material to form a dynamic hologram (two of the three input beams), or real-time diffraction pattern, in the material.
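A trivial Python sketch of the frequency-doubling bookkeeping described above (the wavelength halves while the photon energy doubles); the constants are standard SI values and the function name is purely illustrative:

h, c, eV = 6.62607015e-34, 2.99792458e8, 1.602176634e-19

def second_harmonic(wavelength_nm):
    # Returns the doubled-frequency (halved-wavelength) output and the pump
    # photon energy in electronvolts.
    photon_energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    return wavelength_nm / 2, photon_energy_eV

print(second_harmonic(1064.0))   # (532.0, ~1.165) -> 532 nm green from an Nd:YAG pump
print(second_harmonic(800.0))    # (400.0, ~1.55)  -> 400 nm violet from a Ti:sapphire pump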
The third incident beam diffracts at this dynamic hologram, and, in the process, reads out the phase-conjugate wave. In effect, all three incident beams interact (essentially) simultaneously to form several real-time holograms, resulting in a set of diffracted output waves that phase up as the "time-reversed" beam. In the language of nonlinear optics, the interacting beams result in a nonlinear polarization within the material, which coherently radiates to form the phase-conjugate wave. Reversal of wavefront means a perfect reversal of photons' linear momentum and angular momentum. The reversal of angular momentum means reversal of both polarization state and orbital angular momentum. Reversal of orbital angular momentum of an optical vortex is due to the perfect match of helical phase profiles of the incident and reflected beams. Optical phase conjugation is implemented via stimulated Brillouin scattering, four-wave mixing, three-wave mixing, static linear holograms and some other tools. The most common way of producing optical phase conjugation is to use a four-wave mixing technique, though it is also possible to use processes such as stimulated Brillouin scattering. Four-wave mixing technique For the four-wave mixing technique, we can describe four beams (j = 1, 2, 3, 4) with electric fields Ξj(x, t) = ½Ej e^(i(ωjt − kj·x)) + c.c., where Ej are the electric field amplitudes. Ξ1 and Ξ2 are known as the two pump waves, with Ξ3 being the signal wave, and Ξ4 being the generated conjugate wave. If the pump waves and the signal wave are superimposed in a medium with a non-zero χ(3), this produces a nonlinear polarization field PNL = ε0χ(3)(Ξ1 + Ξ2 + Ξ3)³, resulting in generation of waves with frequencies given by ω = ±ω1 ± ω2 ± ω3 in addition to third-harmonic generation waves with ω = 3ω1, 3ω2, 3ω3. As above, the phase-matching condition determines which of these waves is the dominant one. By choosing conditions such that ω = ω1 + ω2 − ω3 and k = k1 + k2 − k3, this gives a polarization field Pω ∝ χ(3)E1E2E3* e^(i(ωt − k·x)) + c.c. This is the generating field for the phase-conjugate beam, Ξ4. Its direction is given by k4 = k1 + k2 − k3, and so if the two pump beams are counterpropagating (k1 = −k2), then the conjugate and signal beams propagate in opposite directions (k4 = −k3). This results in the retroreflecting property of the effect. Further, it can be shown that for a medium with refractive index n and a beam interaction length l, the electric field amplitude of the conjugate beam is approximated by E4 ≈ (iωl/2nc) χ(3)E1E2E3*, where c is the speed of light. If the pump beams E1 and E2 are plane (counterpropagating) waves, then E4(x) ∝ E3*(x); that is, the generated beam amplitude is the complex conjugate of the signal beam amplitude. Since the imaginary part of the amplitude contains the phase of the beam, this results in the reversal of phase property of the effect. Note that the constant of proportionality between the signal and conjugate beams can be greater than 1. This is effectively a mirror with a reflection coefficient greater than 100%, producing an amplified reflection. The power for this comes from the two pump beams, which are depleted by the process. The frequency of the conjugate wave can be different from that of the signal wave. If the pump waves are of frequency ω1 = ω2 = ω, and the signal wave is higher in frequency such that ω3 = ω + Δω, then the conjugate wave is of frequency ω4 = ω − Δω. This is known as frequency flipping.
Angular and linear momenta in optical phase conjugation Classical picture In classical Maxwell electrodynamics a phase-conjugating mirror performs reversal of the Poynting vector: Sout(r, t) = −Sin(r, t) ("in" means incident field, "out" means reflected field), where g = S/c² is the linear momentum density of the electromagnetic field. In the same way a phase-conjugated wave has an opposite angular momentum density vector r × g with respect to the incident field. The above identities are valid locally, i.e. in each space point in a given moment for an ideal phase-conjugating mirror. Quantum picture In quantum electrodynamics the photon with energy ℏω also possesses linear momentum ℏk and angular momentum, whose projection on the propagation axis is ℏℓ, where ℓ is the topological charge of the photon, or winding number, and the projection is taken along the propagation axis. The angular momentum projection on the propagation axis has discrete values 0, ±ℏ, ±2ℏ, .... In quantum electrodynamics the interpretation of phase conjugation is much simpler compared to classical electrodynamics. The photon reflected from a phase-conjugating mirror (out) has opposite directions of linear and angular momenta with respect to the incident photon (in): pout = −pin and (Lz)out = −(Lz)in. Nonlinear optical pattern formation Optical fields transmitted through nonlinear Kerr media can also display pattern formation owing to the nonlinear medium amplifying spatial and temporal noise. The effect is referred to as optical modulation instability. This has been observed in photo-refractive media and photonic lattices, as well as in photo-reactive systems. In the latter case, optical nonlinearity is afforded by reaction-induced increases in refractive index. Examples of pattern formation are spatial solitons and vortex lattices in the framework of the nonlinear Schrödinger equation. Molecular nonlinear optics The early studies of nonlinear optics and materials focused on the inorganic solids. With the development of nonlinear optics, molecular optical properties were investigated, forming molecular nonlinear optics. The traditional approaches used in the past to enhance nonlinearities include extending chromophore π-systems, adjusting bond length alternation, inducing intramolecular charge transfer, extending conjugation in 2D, and engineering multipolar charge distributions. Recently, many novel directions were proposed for enhanced nonlinearity and light manipulation, including twisted chromophores, combining rich density of states with bond alternation, microscopic cascading of second-order nonlinearity, etc. Due to these advantages, molecular nonlinear optics has been widely used in the biophotonics field, including bioimaging, phototherapy, biosensing, etc. Connecting bulk properties to microscopic properties Molecular nonlinear optics relates the optical properties of bulk matter to microscopic molecular properties. Just as the polarizability can be described as a Taylor series expansion, one can expand the induced dipole moment in powers of the electric field as p = μE + αE² + βE³ + ⋯, where μ is the polarizability, α is the first hyperpolarizability, β is the second hyperpolarizability, and so on. Novel Nonlinear Media Certain molecular materials have the ability to be optimized for their optical nonlinearity at the microscopic and bulk levels. Due to the delocalization of electrons in π bonds, such electrons respond more readily to applied optical fields and tend to produce larger linear and nonlinear optical responses than those in single (σ) bonds. In these systems linear response scales with the length of the conjugated π system, while nonlinear response scales even more rapidly.
One of the many applications of molecular nonlinear optics is nonlinear bioimaging. Nonlinear materials such as multi-photon chromophores are used as biomarkers for two-photon spectroscopy, in which the attenuation of the incident light intensity as it passes through the sample is written as

dI/dz = −δNI²,

where N is the number of particles per unit volume, I is the intensity of light, and δ is the two-photon absorption cross section. The resulting signal adopts a Lorentzian lineshape with a cross section proportional to the difference in dipole moments of the ground and final states. Similar highly conjugated chromophores with strong donor–acceptor characteristics are used because of their large difference in dipole moments, and current efforts are being made to extend their π-conjugated systems to enhance their nonlinear optical properties.

Common second-harmonic-generating (SHG) materials

Ordered by pump wavelength:
800 nm: BBO
806 nm: lithium iodate (LiIO3)
860 nm: potassium niobate (KNbO3)
980 nm: KNbO3
1064 nm: monopotassium phosphate (KH2PO4, KDP), lithium triborate (LBO) and β-barium borate (BBO)
1300 nm: gallium selenide (GaSe)
1319 nm: KNbO3, BBO, KDP, potassium titanyl phosphate (KTP), lithium niobate (LiNbO3), LiIO3, and ammonium dihydrogen phosphate (ADP)
1550 nm: potassium titanyl phosphate (KTP), lithium niobate (LiNbO3)
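In second-harmonic generation the output appears at twice the pump frequency, i.e. at half the pump wavelength. The short sketch below is an added illustration (not from the original article); it simply pairs the pump wavelengths listed above with example crystals from the same list and prints the corresponding second-harmonic wavelengths. The pairing is indicative only, since phase-matching details (angle, temperature, polarization) are omitted.

# Example crystals taken from the list above (illustrative pairing only)
shg_crystals = {
    800:  "BBO",
    806:  "LiIO3",
    860:  "KNbO3",
    980:  "KNbO3",
    1064: "KDP / LBO / BBO",
    1300: "GaSe",
    1319: "KNbO3 / BBO / KDP / KTP / LiNbO3 / LiIO3 / ADP",
    1550: "KTP / LiNbO3",
}

for pump_nm, crystal in shg_crystals.items():
    print(f"pump {pump_nm} nm -> second harmonic {pump_nm / 2:.0f} nm ({crystal})")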
Nova
A nova (plural novae or novas) is a transient astronomical event that causes the sudden appearance of a bright, apparently "new" star (hence the name "nova", Latin for "new") that slowly fades over weeks or months. All observed novae involve white dwarfs in close binary systems, but causes of the dramatic appearance of a nova vary, depending on the circumstances of the two progenitor stars. The main sub-classes of novae are classical novae, recurrent novae (RNe), and dwarf novae. They are all considered to be cataclysmic variable stars.

Classical nova eruptions are the most common type. This type is usually created in a close binary star system consisting of a white dwarf and either a main sequence, subgiant, or red giant star. If the orbital period of the system is a few days or less, the white dwarf is close enough to its companion star to draw accreted matter onto its surface, creating a dense but shallow atmosphere. This atmosphere, mostly consisting of hydrogen, is heated by the hot white dwarf and eventually reaches a critical temperature, causing ignition of rapid runaway fusion. The sudden increase in energy expels the atmosphere into interstellar space, creating the envelope seen as visible light during the nova event. In past centuries such an event was thought to be a new star. A few novae produce short-lived nova remnants, lasting for perhaps several centuries.

A recurrent nova involves the same processes as a classical nova, except that the nova event repeats in cycles of a few decades or less as the companion star again feeds the dense atmosphere of the white dwarf after each ignition, as in the star T Coronae Borealis. Under certain conditions, mass accretion can eventually trigger runaway fusion that destroys the white dwarf rather than merely expelling its atmosphere. In this case, the event is usually classified as a Type Ia supernova.

Novae most often occur in the sky along the path of the Milky Way, especially near the observed Galactic Center in Sagittarius; however, they can appear anywhere in the sky. They occur far more frequently than galactic supernovae, averaging about ten per year in the Milky Way. Most are found telescopically, perhaps only one every 12–18 months reaching naked-eye visibility. Novae reaching first or second magnitude occur only a few times per century. The last bright nova was V1369 Centauri, which reached magnitude 3.3 on 14 December 2013.

Etymology

During the sixteenth century, astronomer Tycho Brahe observed the supernova SN 1572 in the constellation Cassiopeia. He described it in his book De nova stella (Latin for "concerning the new star"), giving rise to the adoption of the name nova. In this work he argued that a nearby object should be seen to move relative to the fixed stars, and thus the nova had to be very far away. Although SN 1572 was later found to be a supernova and not a nova, the terms were considered interchangeable until the 1930s. After this, novae were called classical novae to distinguish them from supernovae, as their causes and energies were thought to be different, based solely on the observational evidence. Although the term "stella nova" means "new star", novae most often take place on white dwarfs, which are remnants of extremely old stars.

Stellar evolution of novae

Evolution of potential novae begins with two main sequence stars in a binary system. One of the two evolves into a red giant, leaving its remnant white dwarf core in orbit with the remaining star.
The second star—which may be either a main-sequence star or an aging giant—begins to shed its envelope onto its white dwarf companion when it overflows its Roche lobe. As a result, the white dwarf steadily captures matter from the companion's outer atmosphere in an accretion disk, and in turn, the accreted matter falls into the atmosphere. As the white dwarf consists of degenerate matter, the accreted hydrogen is unable to expand even though its temperature increases. Runaway fusion occurs when the temperature of this atmospheric layer reaches ~20 million K, initiating nuclear burning via the CNO cycle. If the accretion rate is just right, hydrogen fusion may occur in a stable manner on the surface of the white dwarf, giving rise to a supersoft X-ray source, but for most binary system parameters, the hydrogen burning is thermally unstable and rapidly converts a large amount of the hydrogen into other, heavier chemical elements in a runaway reaction, liberating an enormous amount of energy. This blows the remaining gases away from the surface of the white dwarf and produces an extremely bright outburst of light.

The rise to peak brightness may be very rapid, or gradual; after the peak, the brightness declines steadily. The time taken for a nova to decay by 2 or 3 magnitudes from maximum optical brightness is used for grouping novae into speed classes. Fast novae typically will take less than 25 days to decay by 2 magnitudes, while slow novae will take more than 80 days. Despite its violence, the amount of material ejected in a nova is usually only about one ten-thousandth of a solar mass, quite small relative to the mass of the white dwarf. Furthermore, only five percent of the accreted mass is fused during the powerful outburst. Nonetheless, this is enough energy to accelerate nova ejecta to velocities as high as several thousand kilometers per second—higher for fast novae than slow ones—with a concurrent rise in luminosity from a few times solar to 50,000–100,000 times solar. In 2010 scientists using NASA's Fermi Gamma-ray Space Telescope discovered that a nova also can emit gamma rays (>100 MeV).

Potentially, a white dwarf can generate multiple novae over time as additional hydrogen continues to accrete onto its surface from its companion star. Where this repeated flaring is observed, the object is called a recurrent nova. An example is RS Ophiuchi, which is known to have flared seven times (in 1898, 1933, 1958, 1967, 1985, 2006, and 2021). Eventually, the white dwarf can explode as a Type Ia supernova if it approaches the Chandrasekhar limit.

Occasionally, novae are bright enough and close enough to Earth to be conspicuous to the unaided eye. The brightest recent example was Nova Cygni 1975. This nova appeared on 29 August 1975, in the constellation Cygnus about 5 degrees north of Deneb, and reached magnitude 2.0 (nearly as bright as Deneb). The most recent were V1280 Scorpii, which reached magnitude 3.7 on 17 February 2007, and Nova Delphini 2013. Nova Centauri 2013 was discovered 2 December 2013 and so far is the brightest nova of this millennium, reaching magnitude 3.3.

Helium novae

A helium nova (undergoing a helium flash) is a proposed category of nova event that lacks hydrogen lines in its spectrum. The absence of hydrogen lines may be caused by the explosion of a helium shell on a white dwarf. The theory was first proposed in 1989, and the first candidate helium nova to be observed was V445 Puppis, in 2000. Since then, four other novae have been proposed as helium novae.
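The speed classes mentioned above are defined by the time a nova takes to fade by 2 magnitudes from peak (often written t2). As a rough illustration (a sketch added here, not from the source), the snippet below classifies a nova from its t2 value and converts a decline in magnitudes into the corresponding drop in brightness, using the standard relation that 5 magnitudes correspond to a factor of 100 in flux.

def brightness_ratio(delta_mag):
    """Flux ratio corresponding to a decline of delta_mag magnitudes."""
    return 10 ** (-delta_mag / 2.5)

def speed_class(t2_days):
    """Rough classification based on the t2 decline times quoted in the text."""
    if t2_days < 25:
        return "fast nova"
    if t2_days > 80:
        return "slow nova"
    return "intermediate"

print(speed_class(18), "/", speed_class(120))                         # fast nova / slow nova
print(f"2 mag decline -> {brightness_ratio(2):.3f} of peak flux")     # ~0.158
print(f"3 mag decline -> {brightness_ratio(3):.3f} of peak flux")     # ~0.063, about 1/16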
Occurrence rate and astrophysical significance

Astronomers have estimated that the Milky Way experiences roughly 25 to 75 novae per year. The number of novae actually observed in the Milky Way each year is much lower, about 10, probably because distant novae are obscured by gas and dust absorption. As of 2019, 407 probable novae had been recorded in the Milky Way. In the Andromeda Galaxy, roughly 25 novae brighter than about 20th magnitude are discovered each year, and smaller numbers are seen in other nearby galaxies.

Spectroscopic observation of nova ejecta nebulae has shown that they are enriched in elements such as helium, carbon, nitrogen, oxygen, neon, and magnesium. Classical nova explosions are galactic producers of the element lithium. The contribution of novae to the interstellar medium is not great; novae supply far less material to the galaxy than do supernovae, and far less than red giant and supergiant stars.

Observed recurrent novae such as RS Ophiuchi (those with periods on the order of decades) are rare. Astronomers theorize, however, that most, if not all, novae recur, albeit on time scales ranging from 1,000 to 100,000 years. The recurrence interval for a nova is less dependent on the accretion rate of the white dwarf than on its mass; with their powerful gravity, massive white dwarfs require less accretion to fuel an eruption than lower-mass ones. Consequently, the interval is shorter for high-mass white dwarfs. V Sagittae is unusual in that the time of its next eruption can be predicted fairly accurately; it is expected to recur in approximately 2083, plus or minus about 11 years.

Subtypes

Novae are classified according to the light curve decay speed, referred to as either type A, B, C and R, or using the prefix "N":
NA: fast novae, with a rapid brightness increase, followed by a brightness decline of 3 magnitudes—to about 1/16 of maximum brightness—within 100 days.
NB: slow novae, with a brightness decline of 3 magnitudes in 150 days or more.
NC: very slow novae, also known as symbiotic novae, staying at maximum light for a decade or more and then fading very slowly.
NR/RN: recurrent novae, where two or more eruptions separated by 80 years or less have been observed. These are generally also fast.

Remnants

Some novae leave behind visible nebulosity, material expelled in the nova explosion or in multiple explosions.

Novae as distance indicators

Novae have some promise for use as standard candle measurements of distances. For instance, the distribution of their absolute magnitude is bimodal, with a main peak at magnitude −8.8, and a lesser one at −7.5. Novae also have roughly the same absolute magnitude 15 days after their peak (−5.5). Nova-based distance estimates to various nearby galaxies and galaxy clusters have been shown to be of comparable accuracy to those measured with Cepheid variable stars.

Recurrent novae

A recurrent nova (RN) is an object that has been seen to experience repeated nova eruptions. A recurrent nova typically brightens by about 9 magnitudes, whereas a classical nova may brighten by more than 12 magnitudes. Although it is estimated that as many as a quarter of nova systems experience multiple eruptions, only ten recurrent novae have been observed in the Milky Way. Several extragalactic recurrent novae have been observed in the Andromeda Galaxy (M31) and the Large Magellanic Cloud. One of these extragalactic novae, M31N 2008-12a, erupts as frequently as once every 12 months.
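Because novae fade to roughly the same absolute magnitude about 15 days after peak (around −5.5, as noted above), an apparent magnitude measured at that epoch yields a distance through the standard distance modulus m − M = 5 log10(d / 10 pc). The sketch below is an added illustration (not from the source); the example apparent magnitude is invented and interstellar extinction is ignored.

M_15 = -5.5   # approximate absolute magnitude of novae 15 days after peak

def distance_parsecs(apparent_mag, absolute_mag=M_15):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical nova observed at apparent magnitude 18.5 fifteen days after maximum:
d_pc = distance_parsecs(18.5)
print(f"distance ~ {d_pc:,.0f} pc (~{d_pc * 3.26e-6:.2f} million light-years)")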
On 20 April 2016, the Sky & Telescope website reported a sustained brightening of T Coronae Borealis from magnitude 10.5 to about 9.2 starting in February 2015. A similar event had been reported in 1938, followed by another outburst in 1946. By June 2018, the star had dimmed slightly but still remained at an unusually high level of activity. In March or April 2023, it dimmed to magnitude 12.3. A similar dimming occurred in the year before the 1946 outburst, indicating that it would likely erupt between March and September 2024. This predicted outburst has not yet occurred.

Extragalactic novae

Novae are relatively common in the Andromeda Galaxy (M31); several dozen novae (brighter than apparent magnitude +20) are discovered in M31 each year. The Central Bureau for Astronomical Telegrams (CBAT) has tracked novae in M31, M33, and M81.
Nuclear weapon
A nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission (fission or atomic bomb) or a combination of fission and fusion reactions (thermonuclear bomb), producing a nuclear explosion. Both bomb types release large quantities of energy from relatively small amounts of matter. The first test of a fission bomb released an amount of energy approximately equal to 20,000 tons of TNT (20 kilotons). The first thermonuclear ("hydrogen") bomb test released energy approximately equal to 10 million tons of TNT (10 megatons). Nuclear bombs have had yields between 10 tons TNT (the W54) and 50 megatons for the Tsar Bomba (see TNT equivalent). A thermonuclear weapon weighing as little as 600 pounds (270 kg) can release energy equal to more than 1.2 megatons of TNT (this is nearly the record for the ratio between yield and weapon weight, achieved with the W56). A nuclear device no larger than a conventional bomb can devastate an entire city by blast, fire, and radiation. Since they are weapons of mass destruction, the proliferation of nuclear weapons is a focus of international relations policy. Nuclear weapons have been deployed twice in war, both by the United States against the Japanese cities of Hiroshima and Nagasaki in 1945 during World War II.

Testing and deployment

Nuclear weapons have only twice been used in warfare, both times by the United States against Japan at the end of World War II. On August 6, 1945, the United States Army Air Forces (USAAF) detonated a uranium gun-type fission bomb nicknamed "Little Boy" over the Japanese city of Hiroshima; three days later, on August 9, the USAAF detonated a plutonium implosion-type fission bomb nicknamed "Fat Man" over the Japanese city of Nagasaki. These bombings caused injuries that resulted in the deaths of approximately 200,000 civilians and military personnel. The ethics of these bombings and their role in Japan's surrender remain subjects of debate to this day.

Since the atomic bombings of Hiroshima and Nagasaki, nuclear weapons have been detonated over 2,000 times for testing and demonstration. Only a few nations possess such weapons or are suspected of seeking them. The only countries known to have detonated nuclear weapons—and acknowledge possessing them—are (chronologically by date of first test) the United States, the Soviet Union (succeeded as a nuclear power by Russia), the United Kingdom, France, China, India, Pakistan, and North Korea. Israel is believed to possess nuclear weapons, though, in a policy of deliberate ambiguity, it does not acknowledge having them. Germany, Italy, Turkey, Belgium, the Netherlands, and Belarus are nuclear weapons sharing states. South Africa is the only country to have independently developed and then renounced and dismantled its nuclear weapons. The Treaty on the Non-Proliferation of Nuclear Weapons aims to reduce the spread of nuclear weapons, but there are different views of its effectiveness.

Types

There are two basic types of nuclear weapons: those that derive the majority of their energy from nuclear fission reactions alone, and those that use fission reactions to begin nuclear fusion reactions that produce a large amount of the total energy output.

Fission weapons

All existing nuclear weapons derive some of their explosive energy from nuclear fission reactions. Weapons whose explosive output is exclusively from fission reactions are commonly referred to as atomic bombs or atom bombs (abbreviated as A-bombs). This has long been noted as something of a misnomer, as their energy comes from the nucleus of the atom, just as it does with fusion weapons.
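The yields above are quoted in TNT equivalent, where one ton of TNT is defined as 4.184 gigajoules. The sketch below is an added illustration (not from the article) that converts a few of the yields quoted in the text, including the approximate figures, into joules, making the enormous span from the 10-ton W54 to the 50-megaton Tsar Bomba explicit.

TON_TNT_J = 4.184e9   # joules per ton of TNT, by definition of the TNT equivalent

yields_tons = {
    "W54 (low end)": 10,
    "First fission test (~20 kt)": 20_000,
    "First thermonuclear test (~10 Mt)": 10_000_000,
    "Tsar Bomba (50 Mt)": 50_000_000,
}

for name, tons in yields_tons.items():
    print(f"{name:35s} {tons * TON_TNT_J:.2e} J")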
In fission weapons, a mass of fissile material (enriched uranium or plutonium) is forced into supercriticality—allowing an exponential growth of nuclear chain reactions—either by shooting one piece of sub-critical material into another (the "gun" method) or by compression of a sub-critical sphere or cylinder of fissile material using chemically fueled explosive lenses. The latter approach, the "implosion" method, is more sophisticated and more efficient (smaller, less massive, and requiring less of the expensive fissile fuel) than the former. A major challenge in all nuclear weapon designs is to ensure that a significant fraction of the fuel is consumed before the weapon destroys itself. The amount of energy released by fission bombs can range from the equivalent of just under a ton to upwards of 500,000 tons (500 kilotons) of TNT.

All fission reactions generate fission products, the remains of the split atomic nuclei. Many fission products are either highly radioactive (but short-lived) or moderately radioactive (but long-lived), and as such, they are a serious form of radioactive contamination. Fission products are the principal radioactive component of nuclear fallout. Another source of radioactivity is the burst of free neutrons produced by the weapon. When they collide with other nuclei in the surrounding material, the neutrons transmute those nuclei into other isotopes, altering their stability and making them radioactive.

The most commonly used fissile materials for nuclear weapons applications have been uranium-235 and plutonium-239. Less commonly used has been uranium-233. Neptunium-237 and some isotopes of americium may be usable for nuclear explosives as well, but it is not clear that this has ever been implemented, and their plausible use in nuclear weapons is a matter of dispute.

Fusion weapons

The other basic type of nuclear weapon produces a large proportion of its energy in nuclear fusion reactions. Such fusion weapons are generally referred to as thermonuclear weapons or more colloquially as hydrogen bombs (abbreviated as H-bombs), as they rely on fusion reactions between isotopes of hydrogen (deuterium and tritium). All such weapons derive a significant portion of their energy from fission reactions used to "trigger" fusion reactions, and fusion reactions can themselves trigger additional fission reactions. Only six countries—the United States, Russia, the United Kingdom, China, France, and India—have conducted thermonuclear weapon tests. Whether India has detonated a "true" multi-staged thermonuclear weapon is controversial. North Korea claims to have tested a fusion weapon, though this claim is disputed. Thermonuclear weapons are considered much more difficult to successfully design and execute than primitive fission weapons. Almost all of the nuclear weapons deployed today use the thermonuclear design because it results in an explosion hundreds of times stronger than that of a fission bomb of similar weight.

Thermonuclear bombs work by using the energy of a fission bomb to compress and heat fusion fuel. In the Teller-Ulam design, which accounts for all multi-megaton yield hydrogen bombs, this is accomplished by placing a fission bomb and fusion fuel (tritium, deuterium, or lithium deuteride) in proximity within a special, radiation-reflecting container. When the fission bomb is detonated, gamma rays and X-rays emitted first compress the fusion fuel, then heat it to thermonuclear temperatures.
The ensuing fusion reaction creates enormous numbers of high-speed neutrons, which can then induce fission in materials not normally prone to it, such as depleted uranium. Each of these components is known as a "stage", with the fission bomb as the "primary" and the fusion capsule as the "secondary". In large, megaton-range hydrogen bombs, about half of the yield comes from the final fissioning of depleted uranium. Virtually all thermonuclear weapons deployed today use the "two-stage" design described above, but it is possible to add additional fusion stages—each stage igniting a larger amount of fusion fuel in the next stage. This technique can be used to construct thermonuclear weapons of arbitrarily large yield. This is in contrast to fission bombs, which are limited in their explosive power due to criticality danger (premature nuclear chain reaction caused by too-large amounts of pre-assembled fissile fuel). The largest nuclear weapon ever detonated, the Tsar Bomba of the USSR, which released an energy equivalent of over 50 megatons of TNT, was a three-stage weapon. Most thermonuclear weapons are considerably smaller than this, due to practical constraints from missile warhead space and weight requirements. In the early 1950s the Livermore Laboratory in the United States had plans for the testing of two massive bombs, Gnomon and Sundial, with yields of 1 gigaton of TNT and 10 gigatons of TNT respectively.

Fusion reactions do not create fission products, and thus contribute far less to the creation of nuclear fallout than fission reactions, but because all thermonuclear weapons contain at least one fission stage, and many high-yield thermonuclear devices have a final fission stage, thermonuclear weapons can generate at least as much nuclear fallout as fission-only weapons. Furthermore, high yield thermonuclear explosions (most dangerously ground bursts) have the force to lift radioactive debris upwards past the tropopause into the stratosphere, where the calm non-turbulent winds permit the debris to travel great distances from the burst, eventually settling and unpredictably contaminating areas far removed from the target of the explosion.

Other types

There are other types of nuclear weapons as well. For example, a boosted fission weapon is a fission bomb that increases its explosive yield through a small number of fusion reactions, but it is not a fusion bomb. In the boosted bomb, the neutrons produced by the fusion reactions serve primarily to increase the efficiency of the fission bomb. There are two types of boosted fission bomb: internally boosted, in which a deuterium-tritium mixture is injected into the bomb core, and externally boosted, in which concentric shells of lithium-deuteride and depleted uranium are layered on the outside of the fission bomb core. The external method of boosting enabled the USSR to field the first partially thermonuclear weapons, but it is now obsolete because it demands a spherical bomb geometry, which was adequate during the 1950s arms race when bomber aircraft were the only available delivery vehicles.

The detonation of any nuclear weapon is accompanied by a blast of neutron radiation. Surrounding a nuclear weapon with suitable materials (such as cobalt or gold) creates a weapon known as a salted bomb. This device can produce exceptionally large quantities of long-lived radioactive contamination.
It has been conjectured that such a device could serve as a "doomsday weapon" because such a large quantity of radioactive material with half-lives of decades, lifted into the stratosphere where winds would distribute it around the globe, would make all life on the planet extinct.

In connection with the Strategic Defense Initiative, research into the nuclear pumped laser was conducted under the DOD program Project Excalibur, but this did not result in a working weapon. The concept involves tapping the energy of an exploding nuclear bomb to power a single-shot laser that is directed at a distant target.

During the Starfish Prime high-altitude nuclear test in 1962, an unexpected effect was produced which is called a nuclear electromagnetic pulse. This is an intense flash of electromagnetic energy produced by a rain of high-energy electrons which in turn are produced by a nuclear bomb's gamma rays. This flash of energy can permanently destroy or disrupt electronic equipment if insufficiently shielded. It has been proposed to use this effect to disable an enemy's military and civilian infrastructure as an adjunct to other nuclear or conventional military operations. By itself it could also be useful to terrorists for crippling a nation's economic electronics-based infrastructure. Because the effect is most effectively produced by high altitude nuclear detonations (by military weapons delivered by air, though ground bursts also produce EMP effects over a localized area), it can produce damage to electronics over a wide, even continental, geographical area.

Research has been done into the possibility of pure fusion bombs: nuclear weapons that consist of fusion reactions without requiring a fission bomb to initiate them. Such a device might provide a simpler path to thermonuclear weapons than one that required the development of fission weapons first, and pure fusion weapons would create significantly less nuclear fallout than other thermonuclear weapons because they would not disperse fission products. In 1998, the United States Department of Energy divulged that the United States had, "...made a substantial investment" in the past to develop pure fusion weapons, but that, "The U.S. does not have and is not developing a pure fusion weapon", and that, "No credible design for a pure fusion weapon resulted from the DOE investment".

Nuclear isomers provide a possible pathway to fissionless fusion bombs. These are naturally occurring isotopes (178m2Hf being a prominent example) which exist in an elevated energy state. Mechanisms to release this energy as bursts of gamma radiation (as in the hafnium controversy) have been proposed as possible triggers for conventional thermonuclear reactions. Antimatter, which consists of particles resembling ordinary matter particles in most of their properties but having opposite electric charge, has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it is feasible beyond the military domain. However, the US Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself. A fourth generation nuclear weapon design is related to, and relies upon, the same principle as antimatter-catalyzed nuclear pulse propulsion.
Most variation in nuclear weapon design is for the purpose of achieving different yields for different situations, and in manipulating design elements to attempt to minimize weapon size, radiation hardness or requirements for special materials, especially fissile fuel or tritium. Tactical nuclear weapons Some nuclear weapons are designed for special purposes; most of these are for non-strategic (decisively war-winning) purposes and are referred to as tactical nuclear weapons. The neutron bomb purportedly conceived by Sam Cohen is a thermonuclear weapon that yields a relatively small explosion but a relatively large amount of neutron radiation. Such a weapon could, according to tacticians, be used to cause massive biological casualties while leaving inanimate infrastructure mostly intact and creating minimal fallout. Because high energy neutrons are capable of penetrating dense matter, such as tank armor, neutron warheads were procured in the 1980s (though not deployed in Europe) for use as tactical payloads for US Army artillery shells (200 mm W79 and 155 mm W82) and short range missile forces. Soviet authorities announced similar intentions for neutron warhead deployment in Europe; indeed, they claimed to have originally invented the neutron bomb, but their deployment on USSR tactical nuclear forces is unverifiable. A type of nuclear explosive most suitable for use by ground special forces was the Special Atomic Demolition Munition, or SADM, sometimes popularly known as a suitcase nuke. This is a nuclear bomb that is man-portable, or at least truck-portable, and though of a relatively small yield (one or two kilotons) is sufficient to destroy important tactical targets such as bridges, dams, tunnels, important military or commercial installations, etc. either behind enemy lines or pre-emptively on friendly territory soon to be overtaken by invading enemy forces. These weapons require plutonium fuel and are particularly "dirty". They also demand especially stringent security precautions in their storage and deployment. Small "tactical" nuclear weapons were deployed for use as antiaircraft weapons. Examples include the USAF AIR-2 Genie, the AIM-26 Falcon and US Army Nike Hercules. Missile interceptors such as the Sprint and the Spartan also used small nuclear warheads (optimized to produce neutron or X-ray flux) but were for use against enemy strategic warheads. Other small, or tactical, nuclear weapons were deployed by naval forces for use primarily as antisubmarine weapons. These included nuclear depth bombs or nuclear armed torpedoes. Nuclear mines for use on land or at sea are also possibilities. Weapons delivery The system used to deliver a nuclear weapon to its target is an important factor affecting both nuclear weapon design and nuclear strategy. The design, development, and maintenance of delivery systems are among the most expensive parts of a nuclear weapons program; they account, for example, for 57% of the financial resources spent by the United States on nuclear weapons projects since 1940. The simplest method for delivering a nuclear weapon is a gravity bomb dropped from aircraft; this was the method used by the United States against Japan in 1945. This method places few restrictions on the size of the weapon. It does, however, limit attack range, response time to an impending attack, and the number of weapons that a country can field at the same time. With miniaturization, nuclear bombs can be delivered by both strategic bombers and tactical fighter-bombers. 
This method is the primary means of nuclear weapons delivery; the majority of US nuclear warheads, for example, are free-fall gravity bombs, namely the B61, which is being improved upon to this day. Preferable from a strategic point of view is a nuclear weapon mounted on a missile, which can use a ballistic trajectory to deliver the warhead over the horizon. Although even short-range missiles allow for a faster and less vulnerable attack, the development of long-range intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs) has given some nations the ability to plausibly deliver missiles anywhere on the globe with a high likelihood of success. More advanced systems, such as multiple independently targetable reentry vehicles (MIRVs), can launch multiple warheads at different targets from one missile, reducing the chance of a successful missile defense. Today, missiles are most common among systems designed for delivery of nuclear weapons. Making a warhead small enough to fit onto a missile, though, can be difficult. Tactical weapons have involved the most variety of delivery types, including not only gravity bombs and missiles but also artillery shells, land mines, and nuclear depth charges and torpedoes for anti-submarine warfare. An atomic mortar has been tested by the United States. Small, two-man portable tactical weapons (somewhat misleadingly referred to as suitcase bombs), such as the Special Atomic Demolition Munition, have been developed, although the difficulty of combining sufficient yield with portability limits their military utility. Nuclear strategy Nuclear warfare strategy is a set of policies that deal with preventing or fighting a nuclear war. The policy of trying to prevent an attack by a nuclear weapon from another country by threatening nuclear retaliation is known as the strategy of nuclear deterrence. The goal in deterrence is to always maintain a second strike capability (the ability of a country to respond to a nuclear attack with one of its own) and potentially to strive for first strike status (the ability to destroy an enemy's nuclear forces before they could retaliate). During the Cold War, policy and military theorists considered the sorts of policies that might prevent a nuclear attack, and they developed game theory models that could lead to stable deterrence conditions. Different forms of nuclear weapons delivery (see above) allow for different types of nuclear strategies. The goals of any strategy are generally to make it difficult for an enemy to launch a pre-emptive strike against the weapon system and difficult to defend against the delivery of the weapon during a potential conflict. This can mean keeping weapon locations hidden, such as deploying them on submarines or land mobile transporter erector launchers whose locations are difficult to track, or it can mean protecting weapons by burying them in hardened missile silo bunkers. Other components of nuclear strategies included using missile defenses to destroy the missiles before they land or implementing civil defense measures using early-warning systems to evacuate citizens to safe areas before an attack. Weapons designed to threaten large populations or to deter attacks are known as strategic weapons. Nuclear weapons for use on a battlefield in military situations are called tactical weapons. Critics of nuclear war strategy often suggest that a nuclear war between two nations would result in mutual annihilation. 
From this point of view, the significance of nuclear weapons is to deter war because any nuclear war would escalate out of mutual distrust and fear, resulting in mutually assured destruction. This threat of national, if not global, destruction has been a strong motivation for anti-nuclear weapons activism. Critics from the peace movement and within the military establishment have questioned the usefulness of such weapons in the current military climate. According to an advisory opinion issued by the International Court of Justice in 1996, the use of (or threat of use of) such weapons would generally be contrary to the rules of international law applicable in armed conflict, but the court did not reach an opinion as to whether or not the threat or use would be lawful in specific extreme circumstances such as if the survival of the state were at stake.

Another deterrence position is that nuclear proliferation can be desirable. In this case, it is argued that, unlike conventional weapons, nuclear weapons deter all-out war between states, and they succeeded in doing this during the Cold War between the US and the Soviet Union. In the late 1950s and early 1960s, Gen. Pierre Marie Gallois of France, an adviser to Charles de Gaulle, argued in books like The Balance of Terror: Strategy for the Nuclear Age (1961) that mere possession of a nuclear arsenal was enough to ensure deterrence, and thus concluded that the spread of nuclear weapons could increase international stability. Some prominent neo-realist scholars, such as Kenneth Waltz and John Mearsheimer, have argued, along the lines of Gallois, that some forms of nuclear proliferation would decrease the likelihood of total war, especially in troubled regions of the world where there exists a single nuclear-weapon state. Aside from the public opinion that opposes proliferation in any form, there are two schools of thought on the matter: those, like Mearsheimer, who favored selective proliferation, and Waltz, who was somewhat more non-interventionist. Interest in proliferation and the stability-instability paradox that it generates continues to this day, with ongoing debate about an indigenous Japanese and South Korean nuclear deterrent against North Korea.

The threat of potentially suicidal terrorists possessing nuclear weapons (a form of nuclear terrorism) complicates the decision process. The prospect of mutually assured destruction might not deter an enemy who expects to die in the confrontation. Further, if the initial act is from a stateless terrorist instead of a sovereign nation, there might not be a nation or specific target to retaliate against. It has been argued, especially after the September 11, 2001, attacks, that this complication calls for a new nuclear strategy, one that is distinct from that which gave relative stability during the Cold War. Since 1996, the United States has had a policy of allowing the targeting of its nuclear weapons at terrorists armed with weapons of mass destruction. Robert Gallucci argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently leak nuclear weapons and materials to them.
By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent." Graham Allison makes a similar case, arguing that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material. "After a nuclear bomb detonates, nuclear forensics cops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin." The process is analogous to identifying a criminal by fingerprints. "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials." According to the Pentagon's June 2019 publication "Doctrine for Joint Nuclear Operations", issued by the Joint Chiefs of Staff, "Integration of nuclear weapons employment with conventional and special operations forces is essential to the success of any mission or operation."

Governance, control, and law

Because they are weapons of mass destruction, the proliferation and possible use of nuclear weapons are important issues in international relations and diplomacy. In most countries, the use of nuclear force can only be authorized by the head of government or head of state. Despite controls and regulations governing nuclear weapons, there is an inherent danger of "accidents, mistakes, false alarms, blackmail, theft, and sabotage".

In the late 1940s, lack of mutual trust prevented the United States and the Soviet Union from making progress on arms control agreements. The Russell–Einstein Manifesto was issued in London on July 9, 1955, by Bertrand Russell in the midst of the Cold War. It highlighted the dangers posed by nuclear weapons and called for world leaders to seek peaceful resolutions to international conflict. The signatories included eleven pre-eminent intellectuals and scientists, including Albert Einstein, who signed it just days before his death on April 18, 1955. A few days after the release, philanthropist Cyrus S. Eaton offered to sponsor a conference—called for in the manifesto—in Pugwash, Nova Scotia, Eaton's birthplace. This conference was to be the first of the Pugwash Conferences on Science and World Affairs, held in July 1957.

By the 1960s, steps were taken to limit both the proliferation of nuclear weapons to other countries and the environmental effects of nuclear testing. The Partial Nuclear Test Ban Treaty (1963) restricted all nuclear testing to underground nuclear testing, to prevent contamination from nuclear fallout, whereas the Treaty on the Non-Proliferation of Nuclear Weapons (1968) attempted to place restrictions on the types of activities signatories could participate in, with the goal of allowing the transference of non-military nuclear technology to member countries without fear of proliferation. In 1957, the International Atomic Energy Agency (IAEA) was established under the mandate of the United Nations to encourage development of peaceful applications of nuclear technology, provide international safeguards against its misuse, and facilitate the application of safety measures in its use. In 1996, many nations signed the Comprehensive Nuclear-Test-Ban Treaty, which prohibits all testing of nuclear weapons.
A testing ban imposes a significant hindrance to nuclear arms development by any complying country. The Treaty requires ratification by 44 specific states before it can go into force; the ratification of eight of these states is still required. Additional treaties and agreements have governed nuclear weapons stockpiles between the countries with the two largest stockpiles, the United States and the Soviet Union, and later between the United States and Russia. These include treaties such as SALT II (never ratified), START I (expired), INF, START II (never in effect), SORT, and New START, as well as non-binding agreements such as SALT I and the Presidential Nuclear Initiatives of 1991. Even when they did not enter into force, these agreements helped limit and later reduce the numbers and types of nuclear weapons between the United States and the Soviet Union/Russia.

Nuclear weapons have also been opposed by agreements between countries. Many nations have been declared Nuclear-Weapon-Free Zones, areas where nuclear weapons production and deployment are prohibited, through the use of treaties. The Treaty of Tlatelolco (1967) prohibited any production or deployment of nuclear weapons in Latin America and the Caribbean, and the Treaty of Pelindaba (1996) prohibits nuclear weapons in many African countries. As recently as 2006 a Central Asian Nuclear Weapon Free Zone was established among the former Soviet republics of Central Asia prohibiting nuclear weapons.

In 1996, the International Court of Justice, the highest court of the United Nations, issued an Advisory Opinion concerned with the "Legality of the Threat or Use of Nuclear Weapons". The court ruled that the use or threat of use of nuclear weapons would violate various articles of international law, including the Geneva Conventions, the Hague Conventions, the UN Charter, and the Universal Declaration of Human Rights. Given the unique, destructive characteristics of nuclear weapons, the International Committee of the Red Cross calls on States to ensure that these weapons are never used, irrespective of whether they consider them lawful or not.

Additionally, there have been other, specific actions meant to discourage countries from developing nuclear arms. In the wake of the tests by India and Pakistan in 1998, economic sanctions were (temporarily) levied against both countries, though neither was a signatory to the Nuclear Non-Proliferation Treaty. One of the stated casus belli for the initiation of the 2003 Iraq War was an accusation by the United States that Iraq was actively pursuing nuclear arms (though this was soon discovered not to be the case as the program had been discontinued). In 1981, Israel bombed the Osirak nuclear reactor being constructed in Iraq, in what it called an attempt to halt Iraq's previous nuclear arms ambitions; in 2007, Israel bombed another reactor being constructed in Syria. In 2013, Mark Diesendorf said that governments of France, India, North Korea, Pakistan, UK, and South Africa have used nuclear power or research reactors to assist nuclear weapons development or to contribute to their supplies of nuclear explosives from military reactors. In 2017, 122 countries, mainly in the Global South, voted in favor of adopting the Treaty on the Prohibition of Nuclear Weapons, which eventually entered into force in 2021.

The Doomsday Clock measures the likelihood of a human-made global catastrophe and is published annually by the Bulletin of the Atomic Scientists.
The two years with the highest likelihood had previously been 1953, when the Clock was set to two minutes until midnight after the US and the Soviet Union began testing hydrogen bombs, and 2018, following the failure of world leaders to address tensions relating to nuclear weapons and climate change issues. In 2023, following the escalation of nuclear threats during the Russian invasion of Ukraine, the Doomsday Clock was set to 90 seconds to midnight, the highest assessed likelihood of global catastrophe in the Clock's history. As of 2024, Russia has intensified nuclear threats in Ukraine and is reportedly planning to place nuclear weapons in orbit, which would breach the 1967 Outer Space Treaty. China is significantly expanding its nuclear arsenal, with projections of over 1,000 warheads by 2030 and up to 1,500 by 2035. North Korea is progressing in intercontinental ballistic missile tests and has a mutual-defense treaty with Russia, exchanging artillery for possible missile technology. Iran is currently viewed as a nuclear "threshold" state.

Disarmament

Nuclear disarmament refers to both the act of reducing or eliminating nuclear weapons and to the end state of a nuclear-free world, in which nuclear weapons are eliminated. Beginning with the 1963 Partial Test Ban Treaty and continuing through the 1996 Comprehensive Nuclear-Test-Ban Treaty, there have been many treaties to limit or reduce nuclear weapons testing and stockpiles. The 1968 Nuclear Non-Proliferation Treaty has as one of its explicit conditions that all signatories must "pursue negotiations in good faith" towards the long-term goal of "complete disarmament". The nuclear-weapon states have largely treated that aspect of the agreement as "decorative" and without force. Only one country—South Africa—has ever fully renounced nuclear weapons it had independently developed. The former Soviet republics of Belarus, Kazakhstan, and Ukraine returned Soviet nuclear arms stationed in their countries to Russia after the collapse of the USSR.

Proponents of nuclear disarmament say that it would lessen the probability of nuclear war, especially accidental nuclear war. Critics of nuclear disarmament say that it would undermine the present nuclear peace and deterrence and would lead to increased global instability. Various American elder statesmen, who were in office during the Cold War period, have been advocating the elimination of nuclear weapons. These officials include Henry Kissinger, George Shultz, Sam Nunn, and William Perry. In January 2010, Lawrence M. Krauss stated that "no issue carries more importance to the long-term health and security of humanity than the effort to reduce, and perhaps one day, rid the world of nuclear weapons". In January 1986, Soviet leader Mikhail Gorbachev publicly proposed a three-stage program for abolishing the world's nuclear weapons by the end of the 20th century. In the years after the end of the Cold War, there have been numerous campaigns to urge the abolition of nuclear weapons, such as that organized by the Global Zero movement, and the goal of a "world without nuclear weapons" was advocated by United States President Barack Obama in an April 2009 speech in Prague. A CNN poll from April 2010 indicated that the American public was nearly evenly split on the issue. Some analysts have argued that nuclear weapons have made the world relatively safer, with peace through deterrence and through the stability–instability paradox, including in South Asia.
Kenneth Waltz has argued that nuclear weapons have helped keep an uneasy peace, and further nuclear weapon proliferation might even help avoid the large scale conventional wars that were so common before their invention at the end of World War II. But former Secretary Henry Kissinger says there is a new danger, which cannot be addressed by deterrence: "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. In a world of suicide bombers, that calculation doesn't operate in any comparable way". George Shultz has said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable". As of early 2019, more than 90% of the world's 13,865 nuclear weapons were owned by Russia and the United States.

United Nations

The UN Office for Disarmament Affairs (UNODA) is a department of the United Nations Secretariat established in January 1998 as part of the United Nations Secretary-General Kofi Annan's plan to reform the UN as presented in his report to the General Assembly in July 1997. Its goal is to promote nuclear disarmament and non-proliferation and the strengthening of the disarmament regimes with respect to other weapons of mass destruction, such as chemical and biological weapons. It also promotes disarmament efforts in the area of conventional weapons, especially land mines and small arms, which are often the weapons of choice in contemporary conflicts.

Controversy

Ethics

Even before the first nuclear weapons had been developed, scientists involved with the Manhattan Project were divided over the use of the weapon. The role of the two atomic bombings in Japan's surrender and the US's ethical justification for them has been the subject of scholarly and popular debate for decades. The question of whether nations should have nuclear weapons, or test them, has been continually and nearly universally controversial.

Notable nuclear weapons accidents

August 21, 1945: While conducting experiments on a plutonium-gallium core at Los Alamos National Laboratory, physicist Harry Daghlian received a lethal dose of radiation when an error caused it to enter prompt criticality. He died 25 days later, on September 15, 1945, from radiation poisoning.
May 21, 1946: While conducting further experiments on the same core at Los Alamos National Laboratory, physicist Louis Slotin accidentally caused the core to become briefly supercritical. He received a lethal dose of gamma and neutron radiation, and died nine days later on May 30, 1946. After the deaths of Daghlian and Slotin, the mass became known as the "demon core". It was ultimately used to construct a bomb for use on the Nevada Test Range.
February 13, 1950: a Convair B-36B crashed in northern British Columbia after jettisoning a Mark IV atomic bomb. This was the first such nuclear weapon loss in history. The accident was designated a "Broken Arrow"—an accident involving a nuclear weapon, but which does not present a risk of war. Experts believe that up to 50 nuclear weapons were lost during the Cold War.
May 22, 1957: a Mark-17 hydrogen bomb accidentally fell from a bomber near Albuquerque, New Mexico. The detonation of the device's conventional explosives destroyed it on impact and formed a large crater on land owned by the University of New Mexico. According to a researcher at the Natural Resources Defense Council, it was one of the most powerful bombs made to date.
June 7, 1960: the 1960 Fort Dix IM-99 accident destroyed a Boeing CIM-10 Bomarc nuclear missile and shelter and contaminated the BOMARC Missile Accident Site in New Jersey.
January 24, 1961: the 1961 Goldsboro B-52 crash occurred near Goldsboro, North Carolina. A Boeing B-52 Stratofortress carrying two Mark 39 nuclear bombs broke up in mid-air, dropping its nuclear payload in the process.
December 5, 1965: the 1965 Philippine Sea A-4 crash, in which a Skyhawk attack aircraft carrying a nuclear weapon fell into the sea. The pilot, the aircraft, and the B43 nuclear bomb were never recovered. It was not until 1989 that the Pentagon revealed the loss of the one-megaton bomb.
January 17, 1966: the 1966 Palomares B-52 crash occurred when a B-52G bomber of the USAF collided with a KC-135 tanker during mid-air refuelling off the coast of Spain. The KC-135 was completely destroyed when its fuel load ignited, killing all four crew members. The B-52G broke apart, killing three of the seven crew members aboard. Of the four Mk28 type hydrogen bombs the B-52G carried, three were found on land near Almería, Spain. The non-nuclear explosives in two of the weapons detonated upon impact with the ground, resulting in the contamination of a 2-square-kilometre (0.78 square mile) area by radioactive plutonium. The fourth, which fell into the Mediterranean Sea, was recovered intact after a 2-month-long search.
January 21, 1968: the 1968 Thule Air Base B-52 crash involved a United States Air Force (USAF) B-52 bomber. The aircraft was carrying four hydrogen bombs when a cabin fire forced the crew to abandon the aircraft. Six crew members ejected safely, but one who did not have an ejection seat was killed while trying to bail out. The bomber crashed onto sea ice in Greenland, causing the nuclear payload to rupture and disperse, which resulted in widespread radioactive contamination. One of the bombs remains lost.
September 18–19, 1980: the Damascus Accident occurred in Damascus, Arkansas, where a Titan II missile equipped with a nuclear warhead exploded. The accident was caused by a maintenance man who dropped a socket from a socket wrench down a shaft, puncturing a fuel tank on the rocket. Leaking fuel resulted in a hypergolic fuel explosion, jettisoning the W-53 warhead beyond the launch site.

Nuclear testing and fallout

Over 500 atmospheric nuclear weapons tests were conducted at various sites around the world from 1945 to 1980. Radioactive fallout from nuclear weapons testing was first drawn to public attention in 1954 when the Castle Bravo hydrogen bomb test at the Pacific Proving Grounds contaminated the crew and catch of the Japanese fishing boat Lucky Dragon. One of the fishermen died in Japan seven months later, and the fear of contaminated tuna led to a temporary boycotting of the popular staple in Japan. The incident caused widespread concern around the world, especially regarding the effects of nuclear fallout and atmospheric nuclear testing, and "provided a decisive impetus for the emergence of the anti-nuclear weapons movement in many countries". As public awareness and concern mounted over the possible health hazards associated with exposure to the nuclear fallout, various studies were done to assess the extent of the hazard. A Centers for Disease Control and Prevention/National Cancer Institute study claims that fallout from atmospheric nuclear tests would lead to perhaps 11,000 excess deaths among people alive during atmospheric testing in the United States from all forms of cancer, including leukemia, from 1951 to well into the 21st century.
The US is the only nation that compensates nuclear test victims. Since the Radiation Exposure Compensation Act of 1990, more than $1.38 billion in compensation has been approved. The money is going to people who took part in the tests, notably at the Nevada Test Site, and to others exposed to the radiation. In addition, leakage of byproducts of nuclear weapon production into groundwater has been an ongoing issue, particularly at the Hanford site.

Effects of nuclear explosions

Effects of nuclear explosions on human health

Some scientists estimate that a nuclear war with 100 Hiroshima-size nuclear explosions on cities could cost the lives of tens of millions of people from long-term climatic effects alone. The climatology hypothesis is that if each city firestorms, a great deal of soot could be thrown up into the atmosphere which could blanket the earth, cutting out sunlight for years on end, causing the disruption of food chains, in what is termed a nuclear winter.

People near the Hiroshima explosion who managed to survive it subsequently suffered a variety of horrible medical effects. Some of these effects are still present to this day:
Initial stage—the first 1–9 weeks, in which the greatest number of deaths occur, with 90% due to thermal injury or blast effects and 10% due to super-lethal radiation exposure.
Intermediate stage—from 10 to 12 weeks. The deaths in this period are from ionizing radiation in the median lethal range (LD50).
Late period—lasting from 13 to 20 weeks. This period brings some improvement in survivors' condition.
Delayed period—from 20+ weeks. Characterized by numerous complications, mostly related to healing of thermal and mechanical injuries; if the individual was exposed to a few hundred to a thousand millisieverts of radiation, it is coupled with infertility, sub-fertility and blood disorders. Furthermore, ionizing radiation above a dose of around 50–100 millisieverts has been shown to statistically increase one's lifetime chance of dying of cancer over the normal unexposed rate of ~25%; in the long term, a heightened rate of cancer, proportional to the dose received, would begin to be observed after ~5+ years, with lesser problems such as eye cataracts and other more minor effects in other organs and tissues also being observed over the long term.
Fallout exposure—depending on whether individuals further afield shelter in place or evacuate perpendicular to the direction of the wind, thereby avoiding contact with the fallout plume, and stay there for the days and weeks after the nuclear explosion, their exposure to fallout, and therefore their total dose, will vary; those who shelter in place or evacuate experience a total dose that is negligible in comparison with someone who simply goes about life as normal. Staying indoors until the most hazardous fallout isotope, I-131, has decayed away to 0.1% of its initial quantity after ten half-lives (about 80 days in the case of I-131) could make the difference between likely contracting thyroid cancer and escaping exposure to this substance completely, depending on the actions of the individual.

Effects of nuclear war

Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term and long-term effects on the climate, potentially causing cold weather known as a "nuclear winter".
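The 0.1%-after-ten-half-lives figure above follows from simple exponential decay: each half-life halves the remaining amount, so after n half-lives a fraction (1/2)^n remains. The short sketch below is an added illustration (not from the article); the 8-day half-life is the approximate value for I-131.

HALF_LIFE_DAYS = 8.0   # approximate half-life of iodine-131

def fraction_remaining(days, half_life=HALF_LIFE_DAYS):
    """Fraction of the original I-131 activity remaining after a given number of days."""
    return 0.5 ** (days / half_life)

for days in (8, 40, 80):
    print(f"after {days:3d} days: {fraction_remaining(days):.4%} remaining")
# After 80 days (ten half-lives) roughly 0.098% of the original activity is left.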
In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia, and maybe several hundred million more through follow-up consequences in those same areas. Many scholars have posited that a global thermonuclear war with Cold War-era stockpiles, or even with the current smaller stockpiles, may lead to the extinction of the human race. The International Physicians for the Prevention of Nuclear War believe that nuclear war could indirectly contribute to human extinction via secondary effects, including environmental consequences, societal breakdown, and economic collapse. It has been estimated that a relatively small-scale nuclear exchange between India and Pakistan involving 100 Hiroshima-yield (15 kiloton) weapons could cause a nuclear winter and kill more than a billion people. According to a peer-reviewed study published in the journal Nature Food in August 2022, a full-scale nuclear war between the US and Russia would directly kill 360 million people, and more than 5 billion people would die from starvation. More than 2 billion people could die from a smaller-scale nuclear war between India and Pakistan. Public opposition Peace movements emerged in Japan, and in 1954 they converged to form a unified "Japan Council against Atomic and Hydrogen Bombs." Japanese opposition to nuclear weapons tests in the Pacific Ocean was widespread, and "an estimated 35 million signatures were collected on petitions calling for bans on nuclear weapons". In the United Kingdom, the Aldermaston Marches organised by the Campaign for Nuclear Disarmament (CND) took place at Easter 1958, when, according to the CND, several thousand people marched for four days from Trafalgar Square, London, to the Atomic Weapons Research Establishment close to Aldermaston in Berkshire, England, to demonstrate their opposition to nuclear weapons. The Aldermaston marches continued into the late 1960s, when tens of thousands of people took part in the four-day marches. In 1959, a letter in the Bulletin of the Atomic Scientists was the start of a successful campaign to stop the Atomic Energy Commission dumping radioactive waste in the sea 19 kilometres from Boston. In 1962, Linus Pauling won the Nobel Peace Prize for his work to stop the atmospheric testing of nuclear weapons, and the "Ban the Bomb" movement spread. In 1963, many countries ratified the Partial Test Ban Treaty prohibiting atmospheric nuclear testing. Radioactive fallout became less of an issue, and the anti-nuclear weapons movement went into decline for some years. A resurgence of interest occurred amid European and American fears of nuclear war in the 1980s. Costs and technology spin-offs According to an audit by the Brookings Institution, between 1940 and 1996 the US spent $ in present-day terms on nuclear weapons programs, 57% of which went to building nuclear weapons delivery systems. A further 6.3% of the total was spent on environmental remediation and nuclear waste management, for example cleaning up the Hanford site, and 7% of the total was spent on making the nuclear weapons themselves. Non-weapons uses Peaceful nuclear explosions (PNEs) are nuclear explosions conducted for non-military purposes, such as activities related to economic development, including the creation of canals. During the 1960s and 1970s, both the United States and the Soviet Union conducted a number of PNEs. The United States created plans for several uses of PNEs, including Operation Plowshare. 
Six of the explosions by the Soviet Union are considered to have been of an applied nature, not just tests. The United States and the Soviet Union later halted their programs. Definitions and limits are covered in the Peaceful Nuclear Explosions Treaty of 1976. The stalled Comprehensive Nuclear-Test-Ban Treaty of 1996 would prohibit all nuclear explosions, regardless of whether they are for peaceful purposes or not. History of development
Technology
Weapons of mass destruction
null
21830
https://en.wikipedia.org/wiki/Nature
Nature
Nature is an inherent character or constitution, particularly of the ecosphere or the universe as a whole. In this general sense nature refers to the laws, elements and phenomena of the physical world, including life. Although humans are part of nature, human activity, or humans as a whole, is often described as at times at odds with nature, or as outright separate from and even superior to it. With the advent of the modern scientific method over the last several centuries, nature came to be seen as a passive reality, organized and moved by divine laws. With the Industrial Revolution, nature increasingly came to be seen as the part of reality deprived of intentional intervention: it was hence considered sacred by some traditions (Rousseau, American transcendentalism) or a mere decorum for divine providence or human history (Hegel, Marx). However, a vitalist vision of nature, closer to the pre-Socratic one, was reborn at the same time, especially after Charles Darwin. Within the various uses of the word today, "nature" often refers to geology and wildlife. Nature can refer to the general realm of living beings, and in some cases to the processes associated with inanimate objects: the way that particular types of things exist and change of their own accord, such as the weather and geology of the Earth. It is often taken to mean the "natural environment" or wilderness: wild animals, rocks, forest, and in general those things that have not been substantially altered by human intervention, or which persist despite human intervention. For example, manufactured objects and human interaction generally are not considered part of nature, unless qualified as, for example, "human nature" or "the whole of nature". This more traditional concept of natural things that can still be found today implies a distinction between the natural and the artificial, with the artificial being understood as that which has been brought into being by a human consciousness or a human mind. Depending on the particular context, the term "natural" might also be distinguished from the unnatural or the supernatural. Etymology The word nature is borrowed from the Old French nature and is derived from the Latin word natura, or "essential qualities, innate disposition", and in ancient times, literally meant "birth". In ancient philosophy, natura is mostly used as the Latin translation of the Greek word physis (φύσις), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers (though this word had a dynamic dimension then, especially for Heraclitus), and has steadily gained currency ever since. Earth Earth is the only planet known to support life, and its natural features are the subject of many fields of scientific research. Within the Solar System, it is third closest to the Sun; it is the largest terrestrial planet and the fifth largest overall. Its most prominent climatic features are its two large polar regions, two relatively narrow temperate zones, and a wide equatorial tropical to subtropical region. Precipitation varies widely with location, from several metres of water per year to less than a millimetre. 71 percent of the Earth's surface is covered by salt-water oceans. 
The remainder consists of continents and islands, with most of the inhabited land in the Northern Hemisphere. Earth has evolved through geological and biological processes that have left traces of the original conditions. The outer surface is divided into several gradually migrating tectonic plates. The interior remains active, with a thick layer of plastic mantle and an iron-filled core that generates a magnetic field. This iron core is composed of a solid inner phase and a fluid outer phase. Convective motion in the core generates electric currents through dynamo action, and these, in turn, generate the geomagnetic field. The atmospheric conditions have been significantly altered from the original conditions by the presence of life-forms, which create an ecological balance that stabilizes the surface conditions. Despite the wide regional variations in climate by latitude and other geographic factors, the long-term average global climate is quite stable during interglacial periods, and variations of a degree or two of average global temperature have historically had major effects on the ecological balance, and on the actual geography of the Earth. Geology Geology is the science and study of the solid and liquid matter that constitutes the Earth. The field of geology encompasses the study of the composition, structure, physical properties, dynamics, and history of Earth materials, and the processes by which they are formed, moved, and changed. The field is a major academic discipline, and is also important for mineral and hydrocarbon extraction, knowledge about and mitigation of natural hazards, some geotechnical engineering fields, and understanding past climates and environments. Geological evolution The geology of an area evolves through time as rock units are deposited and inserted and deformational processes change their shapes and locations. Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material, such as volcanic ash or lava flows, blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude. After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates. Historical perspective Earth is estimated to have formed 4.54 billion years ago from the solar nebula, along with the Sun and other planets. The Moon formed roughly 20 million years later. Initially molten, the outer layer of the Earth cooled, resulting in the solid crust. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, most or all of which came from ice delivered by comets, produced the oceans and other water sources. The highly energetic chemistry is believed to have produced a self-replicating molecule around 4 billion years ago. Continents formed, then broke up and reformed as the surface of Earth reshaped over hundreds of millions of years, occasionally combining to make a supercontinent. 
Roughly 750 million years ago, the earliest known supercontinent Rodinia, began to break apart. The continents later recombined to form Pannotia which broke apart about 540 million years ago, then finally Pangaea, which broke apart about 180 million years ago. During the Neoproterozoic era, freezing temperatures covered much of the Earth in glaciers and ice sheets. This hypothesis has been termed the "Snowball Earth", and it is of particular interest as it precedes the Cambrian explosion in which multicellular life forms began to proliferate about 530–540 million years ago. Since the Cambrian explosion there have been five distinctly identifiable mass extinctions. The last mass extinction occurred some 66 million years ago, when a meteorite collision probably triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared small animals such as mammals. Over the past 66 million years, mammalian life diversified. Several million years ago, a species of small African ape gained the ability to stand upright. The subsequent advent of human life, and the development of agriculture and further civilization allowed humans to affect the Earth more rapidly than any previous life form, affecting both the nature and quantity of other organisms as well as global climate. By comparison, the Great Oxygenation Event, produced by the proliferation of algae during the Siderian period, required about 300 million years to culminate. The present era is classified as part of a mass extinction event, the Holocene extinction event, the fastest ever to have occurred. Some, such as E. O. Wilson of Harvard University, predict that human destruction of the biosphere could cause the extinction of one-half of all species in the next 100 years. The extent of the current extinction event is still being researched, debated and calculated by biologists. Atmosphere, climate, and weather The Earth's atmosphere is a key factor in sustaining the ecosystem. The thin layer of gases that envelops the Earth is held in place by gravity. Air is mostly nitrogen, oxygen, water vapor, with much smaller amounts of carbon dioxide, argon, etc. The atmospheric pressure declines steadily with altitude. The ozone layer plays an important role in depleting the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes. Terrestrial weather occurs almost exclusively in the lower part of the atmosphere, and serves as a convective system for redistributing heat. Ocean currents are another important factor in determining climate, particularly the major underwater thermohaline circulation which distributes heat energy from the equatorial oceans to the polar regions. These currents help to moderate the differences in temperature between winter and summer in the temperate zones. Also, without the redistributions of heat energy by the ocean currents and atmosphere, the tropics would be much hotter, and the polar regions much colder. Weather can have both beneficial and harmful effects. Extremes in weather, such as tornadoes or hurricanes and cyclones, can expend large amounts of energy along their paths, and produce devastation. 
Surface vegetation has evolved a dependence on the seasonal variation of the weather, and sudden changes lasting only a few years can have a dramatic effect, both on the vegetation and on the animals which depend on its growth for their food. Climate is a measure of the long-term trends in the weather. Various factors are known to influence the climate, including ocean currents, surface albedo, greenhouse gases, variations in the solar luminosity, and changes to the Earth's orbit. Based on historical and geological records, the Earth is known to have undergone drastic climate changes in the past, including ice ages. The climate of a region depends on a number of factors, especially latitude. A latitudinal band of the surface with similar climatic attributes forms a climate region. There are a number of such regions, ranging from the tropical climate at the equator to the polar climate in the northern and southern extremes. Weather is also influenced by the seasons, which result from the Earth's axis being tilted relative to its orbital plane. Thus, at any given time during the summer or winter, one part of the Earth is more directly exposed to the rays of the sun. This exposure alternates as the Earth revolves in its orbit. At any given time, regardless of season, the Northern and Southern Hemispheres experience opposite seasons. Weather is a chaotic system that is readily modified by small changes to the environment, so accurate weather forecasting is limited to only a few days. Overall, two things are happening worldwide: (1) temperature is increasing on the average; and (2) regional climates have been undergoing noticeable changes. Water on Earth Water is a chemical substance that is composed of hydrogen and oxygen (H2O) and is vital for all known forms of life. In typical usage, "water" refers only to its liquid form, but it also has a solid state, ice, and a gaseous state, water vapor, or steam. Water covers 71% of the Earth's surface. On Earth, it is found mostly in oceans and other large bodies of water, with 1.6% of water below ground in aquifers and 0.001% in the air as vapor, clouds, and precipitation. Oceans hold 97% of surface water, glaciers, and polar ice caps 2.4%, and other land surface water such as rivers, lakes, and ponds 0.6%. Additionally, a minute amount of the Earth's water is contained within biological bodies and manufactured products. Oceans An ocean is a major body of saline water, and a principal component of the hydrosphere. Approximately 71% of the Earth's surface (an area of some 361 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several 'separate' oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. This concept of a global ocean as a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography. The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria: these divisions are (in descending order of size) the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean, and the Arctic Ocean. 
Smaller regions of the oceans are called seas, gulfs, bays and other names. There are also salt lakes, which are smaller bodies of landlocked saltwater that are not interconnected with the World Ocean. Two notable examples of salt lakes are the Aral Sea and the Great Salt Lake. Lakes A lake (from Latin word lacus) is a terrain feature (or physical feature), a body of liquid on the surface of a world that is localized to the bottom of basin (another type of landform or terrain feature; that is, it is not global) and moves slowly if it moves at all. On Earth, a body of water is considered a lake when it is inland, not part of the ocean, is larger and deeper than a pond, and is fed by a river. The only world other than Earth known to harbor lakes is Titan, Saturn's largest moon, which has lakes of ethane, most likely mixed with methane. It is not known if Titan's lakes are fed by rivers, though Titan's surface is carved by numerous river beds. Natural lakes on Earth are generally found in mountainous areas, rift zones, and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them. Ponds A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding, and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams via current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools. Rivers A river is a natural watercourse, usually freshwater, flowing towards an ocean, a lake, a sea or another river. In a few cases, a river simply flows into the ground or dries up completely before reaching another body of water. Small rivers may also be called by several other names, including stream, creek, brook, rivulet, and rill; there is no general rule that defines what can be called a river. Many names for small rivers are specific to geographic location; one example is Burn in Scotland and North-east England. Sometimes a river is said to be larger than a creek, but this is not always the case, due to vagueness in the language. A river is part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs, and the release of stored water in natural ice and snowpacks (i.e., from glaciers). Streams A stream is a flowing body of water with a current, confined within a bed and stream banks. In the United States, a stream is classified as a watercourse less than wide. Streams are important as conduits in the water cycle, instruments in groundwater recharge, and they serve as corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone. Given the status of the ongoing Holocene extinction, streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. 
The study of streams and waterways in general involves many branches of inter-disciplinary natural science and engineering, including hydrology, fluvial geomorphology, aquatic ecology, fish biology, riparian ecology, and others. Ecosystems Ecosystems are composed of a variety of biotic and abiotic components that function in an interrelated way. The structure and composition are determined by various environmental factors that are interrelated. Variations of these factors will initiate dynamic modifications to the ecosystem. Some of the more important components are soil, atmosphere, radiation from the sun, water, and living organisms. Central to the ecosystem concept is the idea that living organisms interact with every other element in their local environment. Eugene Odum, a founder of ecology, stated: "Any unit that includes all of the organisms (ie: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem." Within the ecosystem, species are connected and dependent upon one another in the food chain, and exchange energy and matter between themselves as well as with their environment. The human ecosystem concept is based on the human/nature dichotomy and the idea that all species are ecologically dependent on each other, as well as on the abiotic constituents of their biotope. A smaller unit of size is called a microecosystem. For example, a microecosystem can be a stone and all the life under it. A macroecosystem might involve a whole ecoregion, with its drainage basin. Wilderness Wilderness is generally defined as areas that have not been significantly modified by human activity. Wilderness areas can be found in preserves, estates, farms, conservation preserves, ranches, national forests, national parks, and even in urban areas along rivers, gulches, or otherwise undeveloped areas. Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, and solitude. Some nature writers believe wilderness areas are vital for the human spirit and creativity, and some ecologists consider wilderness areas to be an integral part of the Earth's self-sustaining natural ecosystem (the biosphere). They may also preserve historic genetic traits and provide habitat for wild flora and fauna that may be difficult or impossible to recreate in zoos, arboretums, or laboratories. Life Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli, and reproduction. Life may also be said to be simply the characteristic state of organisms. Present-day organisms from viruses to humans possess a self-replicating informational molecule (genome), either DNA or RNA (as in some viruses), and such an informational molecule is probably intrinsic to life. It is likely that the earliest forms of life were based on a self-replicating informational molecule (genome), perhaps RNA or a molecule more primitive than RNA or DNA. 
The specific deoxyribonucleotide/ribonucleotide sequence in each currently extant individual organism contains sequence information that functions to promote survival, reproduction, and the capacity to acquire resources necessary for reproduction, and such sequences probably emerged early in the evolution of life. Survival functions present early in the evolution of life likely also included genomic sequences that promote the avoidance of damage to the self-replicating molecule and also the capability to repair such damages that do occur. Repair of some genome damages may have involved using information from another similar molecule by a process of recombination (a primitive form of sexual interaction). Properties common to terrestrial organisms (plants, animals, fungi, protists, archaea, and bacteria) are that they are cellular, carbon-and-water-based with complex organization, having a metabolism, a capacity to grow, respond to stimuli, and reproduce. An entity with these properties is generally considered life. However, not every definition of life considers all of these properties to be essential. Human-made analogs of life may also be considered to be life. The biosphere is the part of Earth's outer shell (including land, surface rocks, water, air and the atmosphere) within which life occurs, and which biotic processes in turn alter or transform. From the broadest geophysiological point of view, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere (rocks), hydrosphere (water), and atmosphere (air). The entire Earth contains over 75 billion tons (150 trillion pounds or about 6.8 × 10^13 kilograms) of biomass (life), which lives within various environments within the biosphere. Over nine-tenths of the total biomass on Earth is plant life, on which animal life depends very heavily for its existence. More than 2 million species of plant and animal life have been identified to date, and estimates of the actual number of existing species range from several million to well over 50 million. The number of individual species of life is constantly in some degree of flux, with new species appearing and others ceasing to exist on a continual basis. The total number of species is in rapid decline. Evolution The origin of life on Earth is not well understood, but it is known to have occurred at least 3.5 billion years ago, during the Hadean or Archean eons on a primordial Earth that had a substantially different environment than is found at present. These life forms possessed the basic traits of self-replication and inheritable traits. Once life had appeared, the process of evolution by natural selection resulted in the development of ever-more diverse life forms. Species that were unable to adapt to the changing environment and competition from other life forms became extinct. However, the fossil record retains evidence of many of these older species. Current fossil and DNA evidence shows that all existing species can trace a continual ancestry back to the first primitive life forms. When basic forms of plant life developed the process of photosynthesis, the sun's energy could be harvested to create conditions which allowed for more complex life forms. The resultant oxygen accumulated in the atmosphere and gave rise to the ozone layer. The incorporation of smaller cells within larger ones resulted in the development of yet more complex cells called eukaryotes. 
Cells within colonies became increasingly specialized, resulting in true multicellular organisms. With the ozone layer absorbing harmful ultraviolet radiation, life colonized the surface of Earth. Microbes The first forms of life to develop on the Earth were unicellular, and they remained the only forms of life until about a billion years ago, when multicellular organisms began to appear. Microorganisms, or microbes, are microscopic, too small for the human eye to see. Microorganisms can be single-celled, such as Bacteria, Archaea, many Protista, and a minority of Fungi. These life forms are found in almost every location on the Earth where there is liquid water, including in the Earth's interior. Their reproduction is both rapid and profuse. The combination of a high mutation rate and a horizontal gene transfer ability makes them highly adaptable, and able to survive in new and sometimes very harsh environments, including outer space. They form an essential part of the planetary ecosystem. However, some microorganisms are pathogenic and can pose health risks to other organisms. Viruses are infectious agents, but they are not autonomous life forms, as is also the case for viroids, satellites, DPIs and prions. Plants and animals Originally Aristotle divided all living things between plants, which generally do not move fast enough for humans to notice, and animals. In Linnaeus' system, these became the kingdoms Vegetabilia (later Plantae) and Animalia. Since then, it has become clear that the Plantae as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these are still often considered plants in many contexts. Bacterial life is sometimes included in flora, and some classifications use the term bacterial flora separately from plant flora. Among the many ways of classifying plants are by regional floras, which, depending on the purpose of study, can also include fossil flora, remnants of plant life from a previous era. People in many regions and countries take great pride in their individual arrays of characteristic flora, which can vary widely across the globe due to differences in climate and terrain. Regional floras commonly are divided into categories such as native flora or agricultural and garden flora. Some types of "native flora" actually were introduced centuries ago by people migrating from one region or continent to another, and have become an integral part of the native, or natural, flora of the place to which they were introduced. This is an example of how human interaction with nature can blur the boundary of what is considered nature. Another category of plant has historically been carved out for weeds. Though the term has fallen into disfavor among botanists as a formal way to categorize "useless" plants, the informal use of the word "weeds" to describe those plants that are deemed worthy of elimination is illustrative of the general tendency of people and societies to seek to alter or shape the course of nature. Similarly, animals are often categorized in ways such as domestic, farm animals, wild animals, pests, etc. according to their relationship to human life. Animals as a category have several characteristics that generally set them apart from other living things. Animals are eukaryotic and usually multicellular, which separates them from bacteria, archaea, and most protists. They are heterotrophic, generally digesting food in an internal chamber, which separates them from plants and algae. 
They are also distinguished from plants, algae, and fungi by lacking cell walls. With a few exceptions—most notably the two phyla consisting of sponges and placozoans—animals have bodies that are differentiated into tissues. These include muscles, which are able to contract and control locomotion, and a nervous system, which sends and processes signals. There is also typically an internal digestive chamber. The eukaryotic cells possessed by all animals are surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. This may be calcified to form structures like shells, bones, and spicules, a framework upon which cells can move about and be reorganized during development and maturation, and which supports the complex anatomy required for mobility. Human interrelationship Human impact Although humans comprise a minuscule proportion of the total living biomass on Earth, the human effect on nature is disproportionately large. Because of the extent of human influence, the boundaries between what humans regard as nature and "made environments" is not clear cut except at the extremes. Even at the extremes, the amount of natural environment that is free of discernible human influence is diminishing at an increasingly rapid pace. A 2020 study published in Nature found that anthropogenic mass (human-made materials) outweighs all living biomass on earth, with plastic alone exceeding the mass of all land and marine animals combined. And according to a 2021 study published in Frontiers in Forests and Global Change, only about 3% of the planet's terrestrial surface is ecologically and faunally intact, with a low human footprint and healthy populations of native animal species. Philip Cafaro, professor of philosophy at the School of Global Environmental Sustainability at Colorado State University, wrote in 2022 that "the cause of global biodiversity loss is clear: other species are being displaced by a rapidly growing human economy." The development of technology by the human race has allowed the greater exploitation of natural resources and has helped to alleviate some of the risk from natural hazards. In spite of this progress, however, the fate of human civilization remains closely linked to changes in the environment. There exists a highly complex feedback loop between the use of advanced technology and changes to the environment that are only slowly becoming understood. Human-made threats to the Earth's natural environment include pollution, deforestation, and disasters such as oil spills. Humans have contributed to the extinction of many plants and animals, with roughly 1 million species threatened with extinction within decades. The loss of biodiversity and ecosystem functions over the last half century have impacted the extent that nature can contribute to human quality of life, and continued declines could pose a major threat to the existence of human civilization, unless a rapid course correction is made. The value of natural resources to human society is poorly reflected in market prices because except for labour costs the natural resources are available free of charge. This distorts market pricing of natural resources and at the same time leads to underinvestment in our natural assets. The annual global cost of public subsidies that damage nature is conservatively estimated at $4–6 trillion (million million). Institutional protections of these natural goods, such as the oceans and rainforests, are lacking. 
Governments have not prevented these economic externalities. Humans employ nature for both leisure and economic activities. The acquisition of natural resources for industrial use remains a sizable component of the world's economic system. Some activities, such as hunting and fishing, are used for both sustenance and leisure, often by different people. Agriculture was first adopted around the 9th millennium BCE. Ranging from food production to energy, nature influences economic wealth. Although early humans gathered uncultivated plant materials for food and employed the medicinal properties of vegetation for healing, most modern human use of plants is through agriculture. The clearance of large tracts of land for crop growth has led to a significant reduction in the amount available of forestation and wetlands, resulting in the loss of habitat for many plant and animal species as well as increased erosion. Aesthetics and beauty Beauty in nature has historically been a prevalent theme in art and books, filling large sections of libraries and bookstores. That nature has been depicted and celebrated by so much art, photography, poetry, and other literature shows the strength with which many people associate nature and beauty. Reasons why this association exists, and what the association consists of, are studied by the branch of philosophy called aesthetics. Beyond certain basic characteristics that many philosophers agree about to explain what is seen as beautiful, the opinions are virtually endless. Nature and wildness have been important subjects in various eras of world history. An early tradition of landscape art began in China during the Tang Dynasty (618–907). The tradition of representing nature as it is became one of the aims of Chinese painting and was a significant influence in Asian art. Although natural wonders are celebrated in the Psalms and the Book of Job, wilderness portrayals in art became more prevalent in the 1800s, especially in the works of the Romantic movement. British artists John Constable and J. M. W. Turner turned their attention to capturing the beauty of the natural world in their paintings. Before that, paintings had been primarily of religious scenes or of human beings. William Wordsworth's poetry described the wonder of the natural world, which had formerly been viewed as a threatening place. Increasingly the valuing of nature became an aspect of Western culture. This artistic movement also coincided with the Transcendentalist movement in the Western world. A common classical idea of beautiful art involves the word mimesis, the imitation of nature. Also in the realm of ideas about beauty in nature is that the perfect is implied through perfect mathematical forms and more generally by patterns in nature. As David Rothenburg writes, "The beautiful is the root of science and the goal of art, the highest possibility that humanity can ever hope to see". Matter and energy The natural sciences view matter as obeying certain laws of nature which scientists seek to understand. Matter is commonly defined as the substance of which physical objects are composed. It constitutes the observable universe. The visible components of the universe are now believed to compose only 4.9 percent of the total mass. The remainder is believed to consist of 26.8 percent cold dark matter and 68.3 percent dark energy. The exact arrangement of these components is still unknown and is under intensive investigation by physicists. 
The behaviour of matter and energy throughout the observable universe appears to follow well-defined physical laws. These laws have been employed to produce cosmological models that successfully explain the structure and the evolution of the universe we can observe. The mathematical expressions of the laws of physics employ a set of twenty physical constants that appear to be static across the observable universe. The values of these constants have been carefully measured, but the reason for their specific values remains a mystery. Beyond Earth Outer space, also simply called space, refers to the relatively empty regions of the Universe outside the atmospheres of celestial bodies. Outer space is used to distinguish it from airspace (and terrestrial locations). There is no discrete boundary between Earth's atmosphere and space, as the atmosphere gradually attenuates with increasing altitude. Outer space within the Solar System is called interplanetary space, which passes over into interstellar space at what is known as the heliopause. Outer space is sparsely filled with several dozen types of organic molecules discovered to date by microwave spectroscopy, blackbody radiation left over from the Big Bang and the origin of the universe, and cosmic rays, which include ionized atomic nuclei and various subatomic particles. There is also some gas, plasma and dust, and small meteors. Additionally, there are signs of human life in outer space today, such as material left over from previous crewed and uncrewed launches which are a potential hazard to spacecraft. Some of this debris re-enters the atmosphere periodically. Although Earth is the only body within the Solar System known to support life, evidence suggests that in the distant past the planet Mars possessed bodies of liquid water on the surface. For a brief period in Mars' history, it may have also been capable of forming life. At present though, most of the water remaining on Mars is frozen. If life exists at all on Mars, it is most likely to be located underground where liquid water can still exist. Conditions on the other terrestrial planets, Mercury and Venus, appear to be too harsh to support life as we know it. But it has been conjectured that Europa, the fourth-largest moon of Jupiter, may possess a sub-surface ocean of liquid water and could potentially host life. Astronomers have started to discover extrasolar Earth analogs – planets that lie in the habitable zone of space surrounding a star, and therefore could possibly host life as we know it.
Physical sciences
Science: General
null
21833
https://en.wikipedia.org/wiki/New%20moon
New moon
In astronomy, the new moon is the first lunar phase, when the Moon and Sun have the same ecliptic longitude. At this phase, the lunar disk is not visible to the naked eye, except when it is silhouetted against the Sun during a solar eclipse. The original meaning of the term 'new moon', which is still sometimes used in calendrical, non-astronomical contexts, is the first visible crescent of the Moon after conjunction with the Sun. This thin waxing crescent is briefly and faintly visible as the Moon gets lower in the western sky after sunset. The precise time and even the date of the appearance of the new moon by this definition will be influenced by the geographical location of the observer. The first crescent marks the beginning of the month in the Islamic calendar and in some lunisolar calendars such as the Hebrew calendar. In the Chinese calendar, the beginning of the month is marked by the last visible crescent of a waning Moon. The astronomical new moon occurs by definition at the moment of conjunction in ecliptical longitude with the Sun when the Moon is invisible from the Earth. This moment is unique and does not depend on location, and in certain circumstances, it coincides with a solar eclipse. A lunation, or synodic month, is the period from one new moon to the next. At the J2000.0 epoch, the average length of a lunation is 29.53059 days (or 29 days, 12 hours, 44 minutes, and 3 seconds). However, the length of any one synodic month can vary from 29.26 to 29.80 days (a range of 12.96 hours) due to the perturbing effects of the Sun's gravity on the Moon's eccentric orbit. Lunation number The Lunation Number or Lunation Cycle is a number given to each lunation beginning from a specific one in history. Several conventions are in use. The most commonly used was the Brown Lunation Number (BLN), which defines "lunation 1" as beginning at the first new moon of 1923, the year when Ernest William Brown's lunar theory was introduced in the American Ephemeris and Nautical Almanac. Lunation 1 occurred at approximately 02:41 UTC, 17 January 1923. With later refinements, the BLN was used in almanacs until 1983. A more recent lunation number (called the Lunation Number) was introduced by Jean Meeus in 1998, and defines lunation 0 as beginning on the first new moon of 2000 (this occurred at approximately 18:14 UTC, 6 January 2000). The formula relating Meeus's Lunation Number with the Brown Lunation Number is BLN = LN + 953. The Goldstine Lunation Number refers to the lunation numbering used by Herman Goldstine, with lunation 0 beginning on 11 January 1001 BCE, and can be calculated using GLN = LN + 37105. The Hebrew Lunation Number is the count of lunations in the Hebrew calendar with lunation 1 beginning on 6 October 3761 BCE. It can be calculated using HLN = LN + 71234. The Islamic Lunation Number is the count of lunations in the Islamic Calendar with lunation 1 beginning on the first day of the month of Muharram, which occurred in 622 CE (15 July, Julian, in the proleptic reckoning). It can be calculated using ILN = LN + 17038. The Thai Lunation Number, called "มาสเกณฑ์" (Maasa-Kendha), defines lunation 0 as the beginning of the Burmese era of the Buddhist calendar on Sunday, 22 March 638 CE. It can be calculated using TLN = LN + 16843. (A small conversion sketch based on these offsets appears below.) Lunisolar calendars Hebrew calendar The new moon, in Hebrew Rosh Chodesh, signifies the start of every Hebrew month and is considered an important date and minor holiday in the Hebrew calendar. 
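The lunation-number conventions above differ from the Meeus Lunation Number only by the fixed offsets quoted (BLN = LN + 953, and so on), and the mean synodic month gives a rough idea of when a given lunation begins. A minimal Python sketch using only the figures quoted above; the epoch time and the use of a simple mean value (which ignores the several-hour variation between individual lunations) are simplifying assumptions, not an ephemeris-grade calculation:

```python
from datetime import datetime, timedelta, timezone

MEAN_SYNODIC_MONTH = 29.53059  # days, mean lunation length at J2000.0 (from the text)
# Meeus lunation 0 began at roughly 18:14 UTC on 6 January 2000 (from the text).
LUNATION_0 = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

# Fixed offsets relating the Meeus Lunation Number (LN) to the other conventions.
OFFSETS = {
    "Brown (BLN)": 953,
    "Goldstine (GLN)": 37105,
    "Hebrew (HLN)": 71234,
    "Islamic (ILN)": 17038,
    "Thai (TLN)": 16843,
}

def equivalent_numbers(ln: int) -> dict:
    """Express a Meeus Lunation Number in the other numbering conventions."""
    return {name: ln + offset for name, offset in OFFSETS.items()}

def approximate_new_moon(ln: int) -> datetime:
    """Mean-value estimate of the new moon beginning lunation `ln` (hours-level accuracy only)."""
    return LUNATION_0 + timedelta(days=ln * MEAN_SYNODIC_MONTH)

print(equivalent_numbers(300))            # e.g. Brown lunation 1253, and so on
print(approximate_new_moon(300).date())   # roughly 24 years after the epoch
```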
The modern form of the calendar practiced in Judaism is a rule-based lunisolar calendar, akin to the Chinese calendar, measuring months defined in lunar cycles as well as years measured in solar cycles, and distinct from the purely lunar Islamic calendar and the predominantly solar Gregorian calendar. The Jewish months are fixed to the annual seasons by setting the new moon of Aviv, the barley ripening, or spring, as the first moon and head of the year. Since the Babylonian captivity, this month is called Nisan, and it is calculated based on mathematical rules designed to ensure that festivals are observed in their traditional season. Passover always falls in the springtime. This fixed lunisolar calendar follows rules introduced by Hillel II and refined until the ninth century. This calculation makes use of a mean lunation length used by Ptolemy and handed down from Babylonians, which is still very accurate: ca. 29.530594 days vs. a present value (see below) of 29.530589 days. This difference of only 0.000005, or five millionths of a day, adds up to about only four hours since Babylonian times. Chinese calendar The new moon is the beginning of the month in the Chinese calendar. Some Buddhist Chinese keep a vegetarian diet on the new moon and full moon each month. Hindu calendar The new moon is significant in the lunar Hindu calendar. The first day of the calendar starts the day after the dark moon phase (Amavasya). There are fifteen moon dates for each of the waxing and waning periods. These fifteen dates are divided evenly into five categories: Nanda, Bhadra', Jaya, Rikta, and Purna, which are cycled through in that order. Nanda dates are considered to be favorable for auspicious works; Bhadra dates for works related to community, social, family, and friends; and Jaya dates for dealing with conflict. Rikta dates are considered beneficial only for works related to cruelty. Purna dates are considered to be favorable for all work. Babylonian calendar Lunar calendars Islamic calendar The lunar Hijri calendar has exactly 12 lunar months in a year of 354 or 355 days. It has retained an observational definition of the new moon, marking the new month when the first crescent moon is seen, and making it impossible to be certain in advance of when a specific month will begin (in particular, the exact date on which the month of Ramadan will begin is not known in advance). In Saudi Arabia, the new King Abdullah Centre for Crescent Observations and Astronomy in Mecca has a clock for addressing this as an international scientific project. In Pakistan, there is a "Central Ruet-e-Hilal Committee" whose head is Mufti Muneeb-ur-Rehman, assisted by 150 observatories of the Pakistan Meteorological Department, which announces the sighting of the new moon. An attempt to unify Muslims on a scientifically calculated worldwide calendar was adopted by both the Fiqh Council of North America and the European Council for Fatwa and Research in 2007. The new calculation requires that conjunction must occur before sunset in Mecca, Saudi Arabia, and that, on the same evening, the moonset must take place after sunset. These can be precisely calculated and therefore a unified calendar is possible should it become adopted worldwide. Solar calendars holding moveable feasts Baháʼí calendar The Baháʼí calendar is a solar calendar with certain new moons observed as moveable feasts. 
In the Baháʼí Faith, effective from 2015 onwards, the "Twin Holy Birthdays", two successive holy days in the Baháʼí calendar (the birth of the Báb and the birth of Bahá'u'lláh), are observed on the first and the second day following the occurrence of the eighth new moon after Naw-Rúz (Baháʼí New Year), as determined in advance by astronomical tables using Tehran as the point of reference. This results in the observance of the Twin Birthdays moving, year to year, from mid-October to mid-November according to the Gregorian calendar. Christian liturgical calendar Easter, the most important feast in the Christian liturgical calendar, is a movable feast. The date of Easter is determined by reference to the ecclesiastical full moon, which, being historically difficult to determine with precision, is defined as being fourteen days after the (first crescent) new moon; a standard arithmetic rule for computing the resulting date is sketched below.
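Because the ecclesiastical full moon is defined by rule rather than by observation, the date of Easter can be worked out purely arithmetically. A sketch of the widely published "Anonymous Gregorian" computus (the Meeus/Jones/Butcher formulation), included only to illustrate that the movable feast follows from fixed arithmetic; the algorithm itself is not taken from this article:

```python
def gregorian_easter(year: int) -> tuple:
    """Return (month, day) of Easter Sunday in the Gregorian calendar.

    Anonymous Gregorian computus in the Meeus/Jones/Butcher form; it encodes
    the ecclesiastical-full-moon rule described above.
    """
    a = year % 19                          # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30     # epact-like quantity
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7   # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))  # (3, 31) -> 31 March 2024
print(gregorian_easter(2025))  # (4, 20) -> 20 April 2025
```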
Physical sciences
Celestial mechanics
Astronomy
21837
https://en.wikipedia.org/wiki/Nanometre
Nanometre
The nanometre (international spelling as used by the International Bureau of Weights and Measures; SI symbol: nm), or nanometer (American spelling), is a unit of length in the International System of Units (SI), equal to one billionth (short scale) or one thousand-millionth (long scale) of a metre (0.000000001 m) and to 1000 picometres. One nanometre can be expressed in scientific notation as 1 × 10−9 m. History The nanometre was formerly known as the "millimicrometre" – or, more commonly, the "millimicron" for short – since it is one thousandth of a micrometre. It was often denoted by the symbol mμ or, more rarely, as μμ (however, μμ should refer to a millionth of a micron). Etymology The name combines the SI prefix nano- (from the Ancient Greek word for "dwarf") with the parent unit name metre (from the Greek for "unit of measurement"). Usage Nanotechnologies are based on physical processes which occur on a scale of nanometres (see nanoscopic scale). The nanometre is often used to express dimensions on an atomic scale: the diameter of a helium atom, for example, is about 0.06 nm, and that of a ribosome is about 20 nm. The nanometre is also commonly used to specify the wavelength of electromagnetic radiation near the visible part of the spectrum: visible light ranges from around 400 to 700 nm. The ångström, which is equal to 0.1 nm, was formerly used for these purposes. Since the late 1980s, in usages such as the 32 nm and the 22 nm semiconductor node, it has also been used to describe typical feature sizes in successive generations of the ITRS Roadmap for miniaturized semiconductor device fabrication in the semiconductor industry. Unicode The CJK Compatibility block in Unicode has the symbol ㎚ (U+339A).
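A minimal sketch of the unit relationships stated above (1 nm = 10^-9 m = 1000 pm, and 1 ångström = 0.1 nm); the helper names and example values are illustrative only:

```python
NM_PER_METRE = 1e9  # 1 m = 1e9 nm, i.e. 1 nm = 1e-9 m

def nm_to_metres(nm: float) -> float:
    return nm / NM_PER_METRE

def nm_to_picometres(nm: float) -> float:
    return nm * 1000.0      # 1 nm = 1000 pm

def nm_to_angstroms(nm: float) -> float:
    return nm * 10.0        # 1 nm = 10 Å, since 1 Å = 0.1 nm

# Example sizes quoted in the text: a helium atom (~0.06 nm), a ribosome (~20 nm),
# and the approximate red end of visible light (~700 nm).
for label, nm in [("helium atom", 0.06), ("ribosome", 20.0), ("red light", 700.0)]:
    print(f"{label}: {nm} nm = {nm_to_metres(nm):.2e} m = {nm_to_angstroms(nm):g} Å")
```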
Physical sciences
Metric
Basics and measurement
21848
https://en.wikipedia.org/wiki/Neurosurgery
Neurosurgery
Neurosurgery or neurological surgery, known in common parlance as brain surgery, is the medical specialty that focuses on the surgical treatment or rehabilitation of disorders which affect any portion of the nervous system including the brain, spinal cord, peripheral nervous system, and cerebrovascular system. Neurosurgery as a medical specialty also includes non-surgical management of some neurological conditions. Education and context In different countries, there are different requirements for an individual to legally practice neurosurgery, and there are varying methods through which they must be educated. In most countries, neurosurgeon training requires a minimum period of seven years after graduating from medical school. United Kingdom In the United Kingdom, students must gain entry into medical school. The MBBS qualification (Bachelor of Medicine, Bachelor of Surgery) takes four to six years depending on the student's route. The newly qualified physician must then complete foundation training lasting two years; this is a paid training program in a hospital or clinical setting covering a range of medical specialties including surgery. Junior doctors then apply to enter the neurosurgical pathway. Unlike most other surgical specialties, neurosurgery currently has its own independent training pathway, which takes around eight years (ST1–8) before trainees, with sufficient experience and practice behind them, are able to sit for consultant exams. Neurosurgery remains consistently amongst the most competitive medical specialties in which to obtain entry. United States In the United States, a neurosurgeon must generally complete four years of undergraduate education, four years of medical school, and seven years of residency (PGY-1 to PGY-7). Most, but not all, residency programs have some component of basic science or clinical research. Neurosurgeons may pursue additional training in the form of a fellowship after residency, or, in some cases, as a senior resident in the form of an enfolded fellowship. These fellowships include pediatric neurosurgery, trauma/neurocritical care, functional and stereotactic surgery, surgical neuro-oncology, radiosurgery, neurovascular surgery, skull-base surgery, peripheral nerve and complex spinal surgery. Fellowships typically span one to two years. In the U.S., neurosurgery is a very small, highly competitive specialty, constituting only 0.5 percent of all physicians. History Neurosurgery, or the premeditated incision into the head for pain relief, has been around for thousands of years, but notable advancements in neurosurgery have only come within the last hundred years. Ancient The Incas appear to have practiced a procedure known as trepanation since before European colonization. During the Middle Ages in Al-Andalus from 936 to 1013 AD, Al-Zahrawi performed surgical treatments of head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. During the Roman Empire, doctors and surgeons performed neurosurgery on depressed skull fractures. Simple forms of neurosurgery were performed on King Henri II in 1559, after a jousting accident involving Gabriel Montgomery left him fatally wounded. Ambroise Paré and Andreas Vesalius, both experts in their field at the time, attempted their own methods to cure Henri, to no avail. In China, Hua Tuo created the first general anaesthesia called mafeisan, which he used on surgical procedures on the brain. 
Modern History of tumor removal: In 1879, after locating it via neurological signs alone, Scottish surgeon William Macewen (1848–1924) performed the first successful brain tumor removal. On November 25, 1884, after English physician Alexander Hughes Bennett (1848–1901) used Macewen's technique to locate it, English surgeon Rickman Godlee (1849–1925) performed the first primary brain tumor removal, which differs from Macewen's operation in that Bennett operated on the exposed brain, whereas Macewen operated outside of the "brain proper" via trepanation. On March 16, 1907, Austrian surgeon Hermann Schloffer became the first to successfully remove a pituitary tumor. Lobotomy: also known as leucotomy, this was a form of psychosurgery, a neurosurgical treatment of mental disorders that involves severing connections in the brain's prefrontal cortex. The originator of the procedure, Portuguese neurologist António Egas Moniz, shared the Nobel Prize for Physiology or Medicine of 1949. Some patients improved in some ways after the operation, but complications and impairments, sometimes severe, were frequent. The procedure was controversial from its initial use, in part due to the balance between benefits and risks. It is now mostly rejected as a treatment and considered non-compliant with patients' rights. History of electrodes in the brain: In 1878, Richard Caton discovered that electrical signals are transmitted through an animal's brain. In 1950, Jose Delgado invented the first electrode implanted in an animal's brain (a bull), using it to make the animal run and change direction. In 1972, the cochlear implant, a neurological prosthetic that allowed deaf people to hear, was marketed for commercial use. In 1998, researcher Philip Kennedy implanted the first Brain Computer Interface (BCI) into a human subject. A survey done in 2010 on the 100 most cited works in neurosurgery shows that the works mainly cover clinical trials evaluating surgical and medical therapies, descriptions of novel techniques in neurosurgery, and descriptions of systems classifying and grading diseases. Modern surgical instruments The main advancements in neurosurgery came about as a result of highly crafted tools. Modern neurosurgical tools, or instruments, include chisels, curettes, dissectors, distractors, elevators, forceps, hooks, impactors, probes, suction tubes, power tools, and robots. Most of these modern tools have been in medical practice for a relatively long time. The main difference of these tools in neurosurgery is the precision with which they are crafted. These tools are crafted with edges that are within a millimeter of desired accuracy. Other tools, such as handheld power saws and robots, have only recently come into common use inside neurological operating rooms. As an example, the University of Utah developed a device for computer-aided design / computer-aided manufacturing (CAD-CAM) which uses an image-guided system to define a cutting tool path for a robotic cranial drill. Organised neurosurgery The World Federation of Neurosurgical Societies (WFNS), founded in 1955 in Switzerland as a professional, scientific, non-governmental organization, is composed of 130 members: 5 continental associations (AANS, AASNS, CAANS, EANS and FLANC), 6 affiliate societies, and 119 national neurosurgical societies, representing some 50,000 neurosurgeons worldwide. It has consultative status in the United Nations. The official journal of the organization is World Neurosurgery. 
The other global organisations are the World Academy of Neurological Surgery (WANS) and the World Federation of Skull Base Societies (WFSBS). Main divisions General neurosurgery involves most neurosurgical conditions including neuro-trauma and other neuro-emergencies such as intracranial hemorrhage. Most level 1 hospitals have this kind of practice. Specialized branches have developed to cater to special and difficult conditions. These specialized branches co-exist with general neurosurgery in more sophisticated hospitals. To practice advanced specialization within neurosurgery, additional higher fellowship training of one to two years is expected of the neurosurgeon. Some of these divisions of neurosurgery are: Vascular neurosurgery includes clipping of aneurysms and performing carotid endarterectomy (CEA). Stereotactic neurosurgery, functional neurosurgery, and epilepsy surgery. Epilepsy surgery includes partial or total corpus callosotomy (severing part or all of the corpus callosum to stop or lessen seizure spread and activity) and the surgical removal of functional, physiological and/or anatomical pieces or divisions of the brain, called epileptic foci, that are operable and are causing seizures. It also includes the more radical and rare partial or total lobectomy, or even hemispherectomy, the removal of part or all of one of the lobes or of one of the cerebral hemispheres of the brain; when possible, those two procedures are also very rarely used in oncological neurosurgery or to treat very severe neurological trauma, such as stab or gunshot wounds to the brain. Oncological neurosurgery, also called neurosurgical oncology, includes pediatric oncological neurosurgery and the treatment of benign and malignant central and peripheral nervous system cancers and pre-cancerous lesions in adults and children (including, among others, glioblastoma multiforme and other gliomas, brain stem cancer, astrocytoma, pontine glioma, medulloblastoma, spinal cancer, tumors of the meninges and intracranial spaces, secondary metastases to the brain, spine, and nerves, and peripheral nervous system tumors). Skull base surgery Spinal neurosurgery Peripheral nerve surgery Pediatric neurosurgery (for cancer, seizures, bleeding, stroke, cognitive disorders or congenital neurological disorders) Commonly performed surgeries According to an analysis by the American College of Surgeons National Surgical Quality Improvement Program (NSQIP), the most common surgeries performed by neurosurgeons between 2006 and 2014 were the following: Anterior cervical discectomy and fusion (ACDF) Craniotomy for brain tumor (CBT) Discectomy Laminectomy Posterolateral lumbar fusion (PLF) Neuropathology Neuropathology is a specialty within the study of pathology focused on diseases of the brain, spinal cord, and neural tissue. This includes the central nervous system and the peripheral nervous system. Tissue analysis comes from either surgical biopsies or post mortem autopsies. Common tissue samples include muscle fibers and nervous tissue. Common applications of neuropathology include studying samples of tissue in patients who have Parkinson's disease, Alzheimer's disease, dementia, Huntington's disease, amyotrophic lateral sclerosis, mitochondrial disease, and any disorder that has neural deterioration in the brain or spinal cord. History While pathology has been studied for millennia, only within the last few hundred years has medicine focused on a tissue- and organ-based approach to disease. 
In 1810, Thomas Hodgkin started to look at damaged tissue for the cause of disease. This was conjoined with the emergence of microscopy and started the current understanding of how the tissue of the human body is studied. Neuroanesthesia Neuroanesthesia is a field of anesthesiology which focuses on neurosurgery. In "awake" brain surgery the patient is conscious for the middle of the procedure and sedated for the beginning and end; anesthesia is not used during the middle portion. This procedure is used when the tumor does not have clear boundaries and the surgeon wants to know whether it is encroaching on critical regions of the brain involved in functions like talking, cognition, vision, and hearing. It is also conducted in procedures in which the surgeon is trying to combat epileptic seizures. History The physician Hippocrates (460–370 BCE) described using different wines to sedate patients while trepanning. In 60 CE, Dioscorides, a physician, pharmacologist, and botanist, detailed how mandrake, henbane, opium, and alcohol were used to put patients to sleep during trepanning. In 972 CE, two brother surgeons in Paramara, now India, used "samohine" to sedate a patient while removing a small tumor, and awoke the patient by pouring onion and vinegar into the patient's mouth. A combination of carbon dioxide, hydrogen, and nitrogen was a form of neuroanesthesia adopted in the 18th century and introduced by Humphry Davy. Neurosurgery methods Various imaging methods are used in modern neurosurgical diagnosis and treatment. They include computer-assisted imaging methods such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET) and magnetoencephalography (MEG), as well as stereotactic radiosurgery. Some neurosurgery procedures involve the use of intra-operative MRI and functional MRI. In conventional neurosurgery the neurosurgeon opens the skull, creating a large opening to access the brain. Techniques involving smaller openings with the aid of microscopes and endoscopes are now being used as well. Methods that utilize small craniotomies in conjunction with high-clarity microscopic visualization of neural tissue offer excellent results. However, the open methods are still traditionally used in trauma or emergency situations. Microsurgery is utilized in many aspects of neurological surgery. Microvascular techniques are used in EC-IC bypass surgery and in carotid endarterectomy. The clipping of an aneurysm is performed under microscopic vision. Minimally-invasive spine surgery utilizes microscopes or endoscopes. Procedures such as microdiscectomy, laminectomy, and artificial disc replacement rely on microsurgery. Using stereotaxy, neurosurgeons can approach a minute target in the brain through a minimal opening. This is used in functional neurosurgery, where electrodes are implanted or gene therapy is instituted with a high level of accuracy, as in the treatment of Parkinson's disease or Alzheimer's disease. Using the combination method of open and stereotactic surgery, intraventricular hemorrhages can potentially be evacuated successfully. Conventional surgery using image guidance technologies is also becoming common and is referred to as surgical navigation, computer-assisted surgery, navigated surgery, or stereotactic navigation. 
Similar to a car or mobile Global Positioning System (GPS), image-guided surgery systems, like Curve Image Guided Surgery and StealthStation, use cameras or electromagnetic fields to capture and relay the patient's anatomy and the surgeon's precise movements in relation to the patient to computer monitors in the operating room. These sophisticated computerized systems are used before and during surgery to help orient the surgeon with three-dimensional images of the patient's anatomy including the tumor; a simplified example of the underlying coordinate registration is sketched below. Real-time functional brain mapping has been employed to identify specific functional regions using electrocorticography (ECoG). Minimally invasive endoscopic surgery is commonly utilized by neurosurgeons when appropriate. Techniques such as endoscopic endonasal surgery are used in pituitary tumors, craniopharyngiomas, chordomas, and the repair of cerebrospinal fluid leaks. Ventricular endoscopy is used in the treatment of intraventricular bleeds, hydrocephalus, colloid cysts and neurocysticercosis. Endonasal endoscopy is at times carried out with neurosurgeons and ENT surgeons working together as a team. Repair of craniofacial disorders and disturbance of cerebrospinal fluid circulation is done by neurosurgeons, who also occasionally team up with maxillofacial and plastic surgeons. Cranioplasty for craniosynostosis is performed by pediatric neurosurgeons with or without plastic surgeons. Neurosurgeons are involved in stereotactic radiosurgery along with radiation oncologists in tumor and AVM treatment. Radiosurgical methods such as Gamma Knife, Cyberknife and Novalis Radiosurgery are used as well. Endovascular neurosurgery utilizes image-guided endovascular procedures for the treatment of aneurysms, AVMs, carotid stenosis, strokes, spinal malformations, and vasospasms. Techniques such as angioplasty, stenting, clot retrieval, embolization, and diagnostic angiography are endovascular procedures. A common procedure performed in neurosurgery is the placement of a ventriculo-peritoneal shunt (VP shunt). In pediatric practice this is often implemented in cases of congenital hydrocephalus. The most common indication for this procedure in adults is normal pressure hydrocephalus (NPH). Neurosurgery of the spine covers the cervical, thoracic and lumbar spine. Some indications for spine surgery include spinal cord compression resulting from trauma, arthritis of the spinal discs, or spondylosis. In cervical cord compression, patients may have difficulty with gait, balance issues, and/or numbness and tingling in the hands or feet. Spondylosis is the condition of spinal disc degeneration and arthritis that may compress the spinal canal. This condition can often result in bone-spurring and disc herniation. Power drills and special instruments are often used to correct any compression problems of the spinal canal. Herniated portions of spinal vertebral discs are removed with special rongeurs; this procedure is known as a discectomy. Generally, once a disc is removed it is replaced by an implant, which will create a bony fusion between the vertebral bodies above and below. Alternatively, a mobile disc can be implanted into the disc space to maintain mobility; this is commonly used in cervical disc surgery. At times, instead of disc removal, a laser discectomy may be used to decompress a nerve root. This method is mainly used for lumbar discs. Laminectomy is the removal of the lamina of the vertebrae of the spine in order to make room for the compressed nerve tissue. 
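The image-guided navigation systems described above depend on registering preoperative image coordinates to the patient's physical anatomy in the operating room. The following is a minimal sketch of one standard ingredient of that process, point-based rigid registration of fiducial markers using the SVD (Kabsch) method; the marker coordinates are invented for illustration and no particular commercial system's software is implied.

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Least-squares rotation R and translation t mapping image_pts onto patient_pts
    (point-based rigid registration via SVD, the Kabsch method)."""
    ci = image_pts.mean(axis=0)                    # centroid of image-space fiducials
    cp = patient_pts.mean(axis=0)                  # centroid of patient-space fiducials
    H = (image_pts - ci).T @ (patient_pts - cp)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Hypothetical fiducial markers: coordinates in the preoperative scan (mm).
image_fiducials = np.array([[10.0, 0.0, 0.0],
                            [0.0, 20.0, 0.0],
                            [0.0, 0.0, 30.0],
                            [15.0, 15.0, 5.0]])

# The same markers as digitised on the patient by a tracked pointer (mm):
# here simply a known rotation plus translation of the image points.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patient_fiducials = image_fiducials @ R_true.T + np.array([5.0, -2.0, 40.0])

R, t = rigid_register(image_fiducials, patient_fiducials)
target_in_image = np.array([12.0, 8.0, 25.0])      # a target chosen on the scan
print("target in patient space:", R @ target_in_image + t)
```

In practice, navigation platforms combine a registration of this kind with continuous optical or electromagnetic tracking of the instruments, so that the surgeon's movements can be displayed against the preoperative images.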
Surgery for chronic pain is a sub-branch of functional neurosurgery. Some of the techniques include implantation of deep brain stimulators, spinal cord stimulators, peripheral stimulators and pain pumps. Surgery of the peripheral nervous system is also possible, and includes the very common procedures of carpal tunnel decompression and peripheral nerve transposition. Numerous other types of nerve entrapment conditions and other problems with the peripheral nervous system are treated as well. Conditions Conditions treated by neurosurgeons include, but are not limited to: Meningitis and other central nervous system infections including abscesses Spinal disc herniation Cervical spinal stenosis and lumbar spinal stenosis Hydrocephalus Head trauma (brain hemorrhages, skull fractures, etc.) Spinal cord trauma Traumatic injuries of peripheral nerves Tumors of the spine, spinal cord and peripheral nerves Intracranial hemorrhage, such as subarachnoid, intraventricular, and intraparenchymal hemorrhages Some forms of drug-resistant epilepsy Some forms of movement disorders (advanced Parkinson's disease, chorea); this involves the use of specially developed minimally invasive stereotactic techniques (functional, stereotactic neurosurgery) such as ablative surgery and deep brain stimulation surgery Intractable pain of cancer or trauma patients and cranial/peripheral nerve pain Some forms of intractable psychiatric disorders Vascular malformations (i.e., arteriovenous malformations, venous angiomas, cavernous angiomas, capillary telangiectasias) of the brain and spinal cord Moyamoya disease Recovery Postoperative pain Pain following brain surgery can be significant and may lengthen recovery, increase the amount of time a person stays in the hospital following surgery, and increase the risk of complications following surgery. Severe acute pain following brain surgery may also increase the risk of a person developing a chronic post-craniotomy headache. Approaches to treating pain in adults include treatment with nonsteroidal anti-inflammatory drugs (NSAIDs), which have been shown to reduce pain for up to 24 hours following surgery. Low-quality evidence supports the use of the medications dexmedetomidine, pregabalin or gabapentin to reduce post-operative pain. Low-quality evidence also supports scalp blocks and scalp infiltration to reduce postoperative pain. Gabapentin or pregabalin may also decrease vomiting and nausea following surgery, based on very low-quality medical evidence. Notable neurosurgeons Saleem Abdulrauf – developed "awake" craniotomy for complex aneurysms and vascular malformations. John R. Adler – Stanford University neurosurgeon who invented the Cyberknife. Alim-Louis Benabid – known as one of the developers of deep brain stimulation surgery for movement disorders. Ben Carson – retired pediatric neurosurgeon from Johns Hopkins Hospital, pioneer in hemispherectomy, and pioneer in the separation of craniopagus twins (joined at the head); former 2016 Republican Party presidential candidate, and former United States Secretary of Housing and Urban Development under the presidency of Donald Trump. Harvey Cushing – known as one of the fathers of modern neurosurgery. Walter Dandy – known as one of the founding fathers of modern neurosurgery. Christopher Duntsch – former neurosurgeon who killed or maimed nearly every patient he operated on before being incarcerated. Victor Horsley – known as the first neurosurgeon. Lars Leksell – Swedish neurosurgeon who developed the Gamma Knife. 
Wirginia Maixner – pediatric neurosurgeon at Melbourne's Royal Children's Hospital, primarily known for separating the conjoined Bangladeshi twins Trishna and Krishna. Henry Marsh – leading English neurosurgeon and pioneer of neurosurgical advancements in Ukraine. Frank Henderson Mayfield – invented the Mayfield skull clamp. B. K. Misra – first neurosurgeon in the world to perform image-guided surgery for aneurysms, the first in South Asia to perform stereotactic radiosurgery, and the first in India to perform awake craniotomy and laparoscopic spine surgery. Karin Muraszko – first woman to occupy a chair of neurosurgery at an American medical school (University of Michigan). Hirotaro Narabayashi – a pioneer of stereotactic neurosurgery. Ayub K. Ommaya – invented the Ommaya reservoir. Wilder Penfield – known as one of the founding fathers of modern neurosurgery and a pioneer of epilepsy neurosurgery. Ludvig Puusepp – known as one of the founding fathers of modern neurosurgery and the world's first professor of neurosurgery. Joseph Ransohoff – known for his pioneering use of medical imaging and catheterization in neurosurgery, and for founding the first neurosurgery intensive care unit. Majid Samii – pioneer of cerebello-pontine angle tumor surgery; the World Federation of Neurosurgical Societies created a medal of honor bearing Samii's name, awarded to outstanding neurosurgeons every two years. Juliet Sekabunga Nalwanga – Uganda's first female neurosurgeon. Hermann Schloffer – invented transsphenoidal surgery in 1907. Robert Wheeler Rand – along with Theodore Kurze, MD, was among the first to introduce the surgical microscope into neurosurgical procedures in 1957, and published the first textbook on microneurosurgery in 1969. Robert J. White – established the Vatican's Commission on Biomedical Ethics in 1981 after his appointment to the Pontifical Academy of Sciences and was famous for his head transplants on living monkeys. Gazi Yaşargil – known as the father of microneurosurgery. Sunandan Basu – renowned neurosurgeon in eastern India. Bioethics in neurosurgery Neurosurgery is a part of practical medicine and the only specialty that involves invasive intervention in the activity of the living brain. The brain ensures the structural and functional integrity of the body and the implementation of all the main life processes of the body. Therefore, neurosurgery faces a wide range of bioethical issues and a significant selection of the latest treatment technologies. Neurosurgery has the following applied scientific and ethical problems: Ethical and legal aspects of clinical research; Axiological deficit due to professional deformation and professional burnout; Limited access to expensive medical services; The industry-specific problem of "medical error" due to the complexity of neurosurgical pathologies and the huge number of possible technologies and tools for their treatment; Controversial bioethical and legal issues of surgery for the treatment of psychiatric diseases; Bioethical discussions regarding the instrumentation of reconstructive surgery, through the use of experimental technologies; Debatable bioethical issues of improving human brain activity with the help of artificial implants, for instance neurocomponents (artificial impulse quasi-neurons); Cyborgization in the transhumanist sense; The ethical issue of standardization of research protocols for testing neuroengineering means of nerve tissue regeneration, in order to improve the implementation of experimental research results in clinical practice.
Biology and health sciences
Surgery
Health
21854
https://en.wikipedia.org/wiki/Navigation
Navigation
Navigation is a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation. It is also the term of art used for the specialized knowledge used by navigators to perform navigation tasks. All navigational techniques involve locating the navigator's position compared to known locations or patterns. Navigation, in a broader sense, can refer to any skill or study that involves the determination of position and direction. In this sense, navigation includes orienteering and pedestrian navigation. History In the European medieval period, navigation was considered part of the set of seven mechanical arts, none of which were used for long voyages across open ocean. Polynesian navigation is probably the earliest form of open-ocean navigation; it was based on memory and observation recorded on instruments like the Marshall Islands stick charts of ocean swells. Early Pacific Polynesians used the motion of stars, weather, the position of certain wildlife species, or the size of waves to find the path from one island to another. Maritime navigation using scientific instruments such as the mariner's astrolabe first occurred in the Mediterranean during the Middle Ages. Although land astrolabes were invented in the Hellenistic period and existed in classical antiquity and the Islamic Golden Age, the oldest record of a sea astrolabe is that of Spanish astronomer Ramon Llull dating from 1295. The perfecting of this navigation instrument is attributed to Portuguese navigators during early Portuguese discoveries in the Age of Discovery. The earliest known description of how to make and use a sea astrolabe comes from Spanish cosmographer Martín Cortés de Albacar's Arte de Navegar (The Art of Navigation) published in 1551, based on the principle of the archipendulum used in constructing the Egyptian pyramids. Open-seas navigation using the astrolabe and the compass started during the Age of Discovery in the 15th century. The Portuguese began systematically exploring the Atlantic coast of Africa from 1418, under the sponsorship of Prince Henry. In 1488 Bartolomeu Dias reached the Indian Ocean by this route. In 1492 the Spanish monarchs funded Christopher Columbus's expedition to sail west to reach the Indies by crossing the Atlantic, which resulted in the discovery of the Americas. In 1498, a Portuguese expedition commanded by Vasco da Gama reached India by sailing around Africa, opening up direct trade with Asia. Soon, the Portuguese sailed further eastward, to the Spice Islands in 1512, landing in China one year later. The first circumnavigation of the earth was completed in 1522 with the Magellan-Elcano expedition, a Spanish voyage of discovery led by Portuguese explorer Ferdinand Magellan and completed by Spanish navigator Juan Sebastián Elcano after the former's death in the Philippines in 1521. The fleet of five ships sailed from Sanlúcar de Barrameda in Southern Spain in 1519, crossed the Atlantic Ocean and after several stopovers rounded the southern tip of South America. Some ships were lost, but the remaining fleet continued across the Pacific making a number of discoveries including Guam and the Philippines. By then, only two ships were left from the original five. 
The Victoria led by Elcano sailed across the Indian Ocean and north along the coast of Africa, to finally arrive in Spain in 1522, three years after its departure. The Trinidad sailed east from the Philippines, trying to find a maritime path back to the Americas, but was unsuccessful. The eastward route across the Pacific, also known as the tornaviaje (return trip) was only discovered forty years later, when Spanish cosmographer Andrés de Urdaneta sailed from the Philippines, north to parallel 39°, and hit the eastward Kuroshio Current which took its galleon across the Pacific. He arrived in Acapulco on October 8, 1565. Etymology The term stems from the 1530s, from Latin navigationem (nom. navigatio), from navigatus, pp. of navigare "to sail, sail over, go by sea, steer a ship," from navis "ship" and the root of agere "to drive". Basic concepts Latitude Roughly, the latitude of a place on Earth is its angular distance north or south of the equator. Latitude is usually expressed in degrees (marked with °) ranging from 0° at the Equator to 90° at the North and South poles. The latitude of the North Pole is 90° N, and the latitude of the South Pole is 90° S. Mariners calculated latitude in the Northern Hemisphere by sighting the pole star (Polaris) with a sextant and using sight reduction tables to correct for height of eye and atmospheric refraction. The height of Polaris in degrees above the horizon is the latitude of the observer, within a degree or so. Longitude Similar to latitude, the longitude of a place on Earth is the angular distance east or west of the prime meridian or Greenwich meridian. Longitude is usually expressed in degrees (marked with °) ranging from 0° at the Greenwich meridian to 180° east and west. Sydney, for example, has a longitude of about 151° east. New York City has a longitude of 74° west. For most of history, mariners struggled to determine longitude. Longitude can be calculated if the precise time of a sighting is known. Lacking that, one can use a sextant to take a lunar distance (also called the lunar observation, or "lunar" for short) that, with a nautical almanac, can be used to calculate the time at zero longitude (see Greenwich Mean Time). Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. For about a hundred years, from about 1767 until about 1850, mariners lacking a chronometer used the method of lunar distances to determine Greenwich time to find their longitude. A mariner with a chronometer could check its reading using a lunar determination of Greenwich time. Loxodrome In navigation, a rhumb line (or loxodrome) is a line crossing all meridians of longitude at the same angle, i.e. a path derived from a defined initial bearing. That is, upon taking an initial bearing, one proceeds along the same bearing, without changing the direction as measured relative to true or magnetic north. Methods of navigation Most modern navigation relies primarily on positions determined electronically by receivers collecting information from satellites. Most other modern techniques rely on finding intersecting lines of position or LOP. A line of position can refer to two different things, either a line on a chart or a line between the observer and an object in real life. A bearing is a measure of the direction to an object. If the navigator measures the direction in real life, the angle can then be drawn on a nautical chart and the navigator will be somewhere on that bearing line on the chart. 
In addition to bearings, navigators also often measure distances to objects. On the chart, a distance produces a circle or arc of position. Circles, arcs, and hyperbolae of positions are often referred to as lines of position. If the navigator draws two lines of position and they intersect, he must be at that position. A fix is the intersection of two or more LOPs; a simple two-bearing fix is sketched below. If only one line of position is available, this may be evaluated against the dead reckoning position to establish an estimated position. Lines (or circles) of position can be derived from a variety of sources: celestial observation (a short segment of the circle of equal altitude, but generally represented as a line), terrestrial range (natural or man-made) when two charted points are observed to be in line with each other, compass bearing to a charted object, radar range to a charted object, on certain coastlines, a depth sounding from echo sounder or hand lead line. There are some methods seldom used today, such as "dipping a light" to calculate the geographic range from observer to lighthouse. Methods of navigation have changed through history. Each new method has enhanced the mariner's ability to complete his voyage. One of the most important judgments the navigator must make is the best method to use. Some types of navigation are depicted in the table. The practice of navigation usually involves a combination of these different methods. Mental navigation checks By mental navigation checks, a pilot or a navigator estimates tracks, distances, and altitudes, which then help the pilot avoid gross navigation errors. Piloting Piloting (also called pilotage) involves navigating an aircraft by visual reference to landmarks, or a water vessel in restricted waters, and fixing its position as precisely as possible at frequent intervals. More so than in other phases of navigation, proper preparation and attention to detail are important. Procedures vary from vessel to vessel, and between military, commercial, and private vessels. As pilotage takes place in shallow waters, it typically involves following courses to ensure sufficient under-keel clearance, ensuring a sufficient depth of water below the hull as well as a consideration for squat. It may also involve navigating a ship within a river, canal or channel in close proximity to land. A military navigation team will nearly always consist of several people. A military navigator might have bearing takers stationed at the gyro repeaters on the bridge wings for taking simultaneous bearings, while the civilian navigator on a merchant ship or leisure craft must often take and plot their position themselves, typically with the aid of electronic position fixing. While the military navigator will have a bearing book and someone to record entries for each fix, the civilian navigator will simply plot the bearings on the chart as they are taken and not record them at all. If the ship is equipped with an ECDIS, it is reasonable for the navigator to simply monitor the progress of the ship along the chosen track, visually ensuring that the ship is proceeding as desired, checking the compass, sounder and other indicators only occasionally. If a pilot is aboard, as is often the case in the most restricted of waters, his judgement can generally be relied upon, further easing the workload. But should the ECDIS fail, the navigator will have to rely on his skill in the manual and time-tested procedures. 
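As a rough illustration of the fix-by-intersecting-LOPs idea described above, the sketch below intersects two bearing lines on a flat, local chart. The landmark positions and bearings are invented, and a real fix would of course be plotted on a nautical chart rather than computed this way.

```python
import numpy as np

def bearing_to_unit(bearing_deg):
    """Unit vector (east, north) for a true bearing measured clockwise from north."""
    b = np.radians(bearing_deg)
    return np.array([np.sin(b), np.cos(b)])

def two_bearing_fix(mark1, bearing1, mark2, bearing2):
    """Intersect the two lines of position defined by bearings taken from the
    (unknown) observer to two charted marks.
    mark1/mark2 are (east, north) chart positions; bearings are in degrees true."""
    u1, u2 = bearing_to_unit(bearing1), bearing_to_unit(bearing2)
    # Observer = mark1 - s1*u1 = mark2 - s2*u2  ->  solve the 2x2 system for s1, s2
    A = np.column_stack((-u1, u2))
    s = np.linalg.solve(A, np.asarray(mark2, float) - np.asarray(mark1, float))
    return np.asarray(mark1, float) - s[0] * u1

# Invented chart positions (nautical miles) and observed bearings:
lighthouse = (0.0, 5.0)     # a charted lighthouse
water_tower = (4.0, 0.0)    # a charted water tower
fix = two_bearing_fix(lighthouse, 350.0, water_tower, 80.0)
print("estimated position (E, N):", np.round(fix, 2))
```

With three or more bearings the lines rarely meet in a single point, and the small "cocked hat" triangle they form gives the navigator a feel for the quality of the fix.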
Celestial navigation Celestial navigation systems are based on observation of the positions of the Sun, Moon, planets and navigational stars. Such systems are in use as well for terrestrial navigating as for interstellar navigating. By knowing which point on the rotating Earth a celestial object is above and measuring its height above the observer's horizon, the navigator can determine his distance from that subpoint. A nautical almanac and a marine chronometer are used to compute the subpoint on Earth a celestial body is over, and a sextant is used to measure the body's angular height above the horizon. That height can then be used to compute distance from the subpoint to create a circular line of position. A navigator shoots a number of stars in succession to give a series of overlapping lines of position. Where they intersect is the celestial fix. The Moon and Sun may also be used. The Sun can also be used by itself to shoot a succession of lines of position (best done around local noon) to determine a position. Marine chronometer In order to accurately measure longitude, the precise time of a sextant sighting (down to the second, if possible) must be recorded. Each second of error is equivalent to 15 seconds of longitude error, which at the equator is a position error of .25 of a nautical mile, about the accuracy limit of manual celestial navigation. The spring-driven marine chronometer is a precision timepiece used aboard ship to provide accurate time for celestial observations. A chronometer differs from a spring-driven watch principally in that it contains a variable lever device to maintain even pressure on the mainspring, and a special balance designed to compensate for temperature variations. A spring-driven chronometer is set approximately to Greenwich mean time (GMT) and is not reset until the instrument is overhauled and cleaned, usually at three-year intervals. The difference between GMT and chronometer time is carefully determined and applied as a correction to all chronometer readings. Spring-driven chronometers must be wound at about the same time each day. Quartz crystal marine chronometers have replaced spring-driven chronometers aboard many ships because of their greater accuracy. They are maintained on GMT directly from radio time signals. This eliminates chronometer error and watch error corrections. Should the second hand be in error by a readable amount, it can be reset electrically. The basic element for time generation is a quartz crystal oscillator. The quartz crystal is temperature compensated and is hermetically sealed in an evacuated envelope. A calibrated adjustment capability is provided to adjust for the aging of the crystal. The chronometer is designed to operate for a minimum of one year on a single set of batteries. Observations may be timed and ship's clocks set with a comparing watch, which is set to chronometer time and taken to the bridge wing for recording sight times. In practice, a wrist watch coordinated to the nearest second with the chronometer will be adequate. A stop watch, either spring wound or digital, may also be used for celestial observations. In this case, the watch is started at a known GMT by chronometer, and the elapsed time of each sight added to this to obtain GMT of the sight. All chronometers and watches should be checked regularly with a radio time signal. Times and frequencies of radio time signals are listed in publications such as Radio Navigational Aids. 
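The error budget quoted for the chronometer above follows directly from the Earth's rotation rate: 360 degrees of longitude in 24 hours is 15 degrees per hour, hence 15 arcseconds per second of time, and one arcminute of longitude at the equator is one nautical mile. A quick arithmetic check of those figures:

```python
# Earth turns through 360 degrees of longitude in 24 hours (86 400 seconds).
deg_per_second = 360 / 86_400                  # about 0.00417 degrees
arcsec_per_second = deg_per_second * 3600      # 15 arcseconds of longitude

# At the equator, 1 arcminute of longitude is 1 nautical mile, so one second
# of clock error corresponds to:
nm_per_second = deg_per_second * 60            # 0.25 nautical miles

print(f"{arcsec_per_second:.0f} arcseconds of longitude per second of time")
print(f"{nm_per_second:.2f} nautical miles of equatorial position error per second")
```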
The marine sextant The second critical component of celestial navigation is to measure the angle formed at the observer's eye between the celestial body and the sensible horizon. The sextant, an optical instrument, is used to perform this function. The sextant consists of two primary assemblies. The frame is a rigid triangular structure with a pivot at the top and a graduated segment of a circle, referred to as the "arc", at the bottom. The second component is the index arm, which is attached to the pivot at the top of the frame. At the bottom is an endless vernier which clamps into teeth on the bottom of the "arc". The optical system consists of two mirrors and, generally, a low power telescope. One mirror, referred to as the "index mirror", is fixed to the top of the index arm, over the pivot. As the index arm is moved, this mirror rotates, and the graduated scale on the arc indicates the measured angle ("altitude"). The second mirror, referred to as the "horizon glass", is fixed to the front of the frame. One half of the horizon glass is silvered and the other half is clear. Light from the celestial body strikes the index mirror and is reflected to the silvered portion of the horizon glass, then back to the observer's eye through the telescope. The observer manipulates the index arm so the reflected image of the body in the horizon glass is just resting on the visual horizon, seen through the clear side of the horizon glass. Adjustment of the sextant consists of checking and aligning all the optical elements to eliminate "index correction". Index correction should be checked, using the horizon or more preferably a star, each time the sextant is used. The practice of taking celestial observations from the deck of a rolling ship, often through cloud cover and with a hazy horizon, is by far the most challenging part of celestial navigation. Inertial navigation An inertial navigation system (INS) is a dead reckoning type of navigation system that computes its position based on motion sensors. Before actually navigating, the initial latitude and longitude and the INS's physical orientation relative to the Earth (e.g., north and level) are established. After alignment, an INS receives impulses from motion detectors that measure (a) the acceleration along three axes (accelerometers), and (b) the rate of rotation about three orthogonal axes (gyroscopes). These enable an INS to continually and accurately calculate its current latitude and longitude (and often velocity). An advantage over other navigation systems is that, once aligned, an INS does not require outside information. An INS is not affected by adverse weather conditions and it cannot be detected or jammed. Its disadvantage is that since the current position is calculated solely from previous positions and motion sensors, its errors are cumulative, increasing at a rate roughly proportional to the time since the initial position was input. Inertial navigation systems must therefore be frequently corrected with a location 'fix' from some other type of navigation system; a toy illustration of this error growth is sketched below. The first inertial system is considered to be the V-2 guidance system deployed by the Germans in 1942. However, inertial sensors can be traced back to the early 19th century. The advantages of INSs led to their use in aircraft, missiles, surface ships and submarines. For example, the U.S. Navy developed the Ships Inertial Navigation System (SINS) during the Polaris missile program to ensure a reliable and accurate navigation system to initialize its missile guidance systems. 
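The cumulative error growth mentioned above can be illustrated with a toy one-dimensional dead-reckoning loop: a small constant accelerometer bias, integrated twice, produces a position error that grows roughly with the square of elapsed time. The numbers below are invented and not representative of any real INS.

```python
# Toy 1-D inertial dead reckoning: integrate acceleration twice to get position.
dt = 1.0                 # time step, seconds
bias = 0.001             # constant accelerometer bias, m/s^2 (invented)
true_accel = 0.0         # the vehicle is actually stationary

velocity = 0.0
position = 0.0
for step in range(1, 3601):                 # one hour of navigation
    measured = true_accel + bias            # biased sensor reading
    velocity += measured * dt               # first integration: velocity
    position += velocity * dt               # second integration: position
    if step % 900 == 0:                     # report every 15 minutes
        print(f"after {step / 60:4.0f} min: position error = {position:8.1f} m")

# The error grows roughly as 0.5 * bias * t**2, which is why an INS must be
# corrected periodically with a fix from some other navigation source.
```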
Inertial navigation systems were in wide use until satellite navigation systems (GPS) became available. INSs are still in common use on submarines (since GPS reception or other fix sources are not possible while submerged) and long-range missiles. Space navigation Not to be confused with satellite navigation, which depends upon satellites to function, space navigation refers to the navigation of spacecraft themselves. This has historically been achieved (during the Apollo program) via a navigational computer, an inertial navigation system, and celestial inputs entered by astronauts, which were recorded by sextant and telescope. Space-rated navigational computers, like those found on Apollo and later missions, are designed to be hardened against possible data corruption from radiation. Another possibility that has been explored for deep space navigation is pulsar navigation, which compares the X-ray bursts from a collection of known pulsars in order to determine the position of a spacecraft. This method has been tested by multiple space agencies, such as NASA and ESA. Electronic navigation Radio navigation A radio direction finder or RDF is a device for finding the direction to a radio source. Because radio waves can travel very long distances "over the horizon", RDF makes a particularly good navigation system for ships and for aircraft that might be flying at a distance from land. An RDF works by rotating a directional antenna and listening for the direction in which the signal from a known station comes through most strongly. This sort of system was widely used in the 1930s and 1940s. RDF antennas are easy to spot on German World War II aircraft, as loops under the rear section of the fuselage, whereas most US aircraft enclosed the antenna in a small teardrop-shaped fairing. In navigational applications, RDF signals are provided in the form of radio beacons, the radio version of a lighthouse. The signal is typically a simple AM broadcast of a Morse code series of letters, which the RDF can tune in to see if the beacon is "on the air". Most modern detectors can also tune in to commercial radio stations, which is particularly useful due to their high power and location near major cities. Decca, OMEGA, and LORAN-C are three similar hyperbolic navigation systems; the time-difference principle behind them is sketched below. Decca was a hyperbolic low frequency radio navigation system (also known as multilateration) that was first deployed during World War II when the Allied forces needed a system which could be used to achieve accurate landings. As was the case with LORAN-C, its primary use was for ship navigation in coastal waters. Fishing vessels were major post-war users, but it was also used on aircraft, including a very early (1949) application of moving-map displays. The system was deployed in the North Sea and was used by helicopters operating to oil platforms. The OMEGA Navigation System was the first truly global radio navigation system for aircraft, operated by the United States in cooperation with six partner nations. OMEGA was developed by the United States Navy for military aviation users. It was approved for development in 1968 and promised a true worldwide oceanic coverage capability with only eight transmitters and the ability to achieve a four-mile (6 km) accuracy when fixing a position. Initially, the system was to be used for navigating nuclear bombers across the North Pole to Russia. Later, it was found useful for submarines. 
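The hyperbolic principle shared by Decca, OMEGA and LORAN can be sketched simply: each measured time difference between a master and a secondary station places the receiver on a hyperbola, and two such curves fix the position. The station layout below is invented, the chart is treated as flat, and a coarse grid search stands in for the real receiver processing.

```python
import numpy as np

C = 299_792_458.0 / 1852.0        # speed of light in nautical miles per second

# Invented master and secondary station positions (nautical miles, flat chart)
master = np.array([0.0, 0.0])
sec_x = np.array([120.0, 0.0])
sec_y = np.array([0.0, 150.0])
true_pos = np.array([60.0, 45.0])  # the "unknown" receiver position

def time_difference(p, secondary):
    """Time-difference of arrival (seconds) between the master and a secondary."""
    return (np.linalg.norm(p - secondary) - np.linalg.norm(p - master)) / C

# The receiver measures two time differences...
td1 = time_difference(true_pos, sec_x)
td2 = time_difference(true_pos, sec_y)

# ...and its position is where the two hyperbolic lines of position cross.
# A coarse grid search (0.5 nm resolution) stands in for real signal processing.
best, best_err = None, np.inf
for x in np.linspace(0, 120, 241):
    for y in np.linspace(0, 150, 301):
        p = np.array([x, y])
        err = abs(time_difference(p, sec_x) - td1) + abs(time_difference(p, sec_y) - td2)
        if err < best_err:
            best, best_err = p, err
print("estimated position:", best, " true position:", true_pos)
```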
Due to the success of the Global Positioning System, the use of Omega declined during the 1990s, to a point where the cost of operating Omega could no longer be justified. Omega was terminated on September 30, 1997, and all stations ceased operation. LORAN is a terrestrial navigation system using low frequency radio transmitters that use the time interval between radio signals received from three or more stations to determine the position of a ship or aircraft. The current version of LORAN in common use is LORAN-C, which operates in the low frequency portion of the EM spectrum from 90 to 110 kHz. Many nations are users of the system, including the United States, Japan, and several European countries. Russia uses a nearly identical system in the same frequency range, called CHAYKA. LORAN use is in steep decline, with GPS being the primary replacement. However, there are attempts to enhance and re-popularize LORAN. LORAN signals are less susceptible to interference and can penetrate better into foliage and buildings than GPS signals. Radar navigation Radar is an effective aid to navigation because it provides ranges and bearings to objects within range of the radar scanner. When a vessel (ship or boat) is within radar range of land or fixed objects (such as special radar aids to navigation and navigation marks), the navigator can take distances and angular bearings to charted objects and use these to establish arcs of position and lines of position on a chart. A fix consisting of only radar information is called a radar fix. Types of radar fixes include "range and bearing to a single object," "two or more bearings," "tangent bearings," and "two or more ranges." Radar can also be used with ECDIS as a means of position fixing, with the radar image or distance/bearing overlaid onto an electronic nautical chart. Parallel indexing is a technique defined by William Burger in the 1957 book The Radar Observer's Handbook. This technique involves creating a line on the screen that is parallel to the ship's course, but offset to the left or right by some distance. This parallel line allows the navigator to maintain a given distance away from hazards. The line on the radar screen is set to a specific distance and angle, then the ship's position relative to the parallel line is observed. This can provide an immediate reference to the navigator as to whether the ship is on or off its intended course. Other techniques that are less used in general navigation have been developed for special situations. One, known as the "contour method," involves marking a transparent plastic template on the radar screen and moving it to the chart to fix a position. Another special technique, known as the Franklin Continuous Radar Plot Technique, involves drawing the path a radar object should follow on the radar display if the ship stays on its planned course. During the transit, the navigator can check that the ship is on track by checking that the pip lies on the drawn line. Satellite navigation Global Navigation Satellite System or GNSS is the term for satellite navigation systems that provide positioning with global coverage. A GNSS allows small electronic receivers to determine their location (longitude, latitude, and altitude) within a few meters using time signals transmitted along a line of sight by radio from satellites; a simplified sketch of the range-based position solution appears below. Receivers on the ground with a fixed position can also be used to calculate the precise time as a reference for scientific experiments. 
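Conceptually, a GNSS receiver solves for its position from ranges to several satellites (plus its own clock offset, which is ignored here). The sketch below reduces the idea to a two-dimensional toy problem with three exact ranges, solved by a few Gauss-Newton iterations; the coordinates are invented and the real measurement model is considerably more involved.

```python
import numpy as np

# Toy 2-D "satellites" (known positions, km) and the true receiver position (km)
sats = np.array([[20200.0, 0.0],
                 [0.0, 20200.0],
                 [14000.0, 14000.0]])
receiver_true = np.array([3000.0, 1500.0])
ranges = np.linalg.norm(sats - receiver_true, axis=1)   # "measured" ranges

# Gauss-Newton iteration on the range equations ||x - sat_i|| = range_i
x = np.zeros(2)                       # crude initial guess
for _ in range(10):
    diffs = x - sats                  # vectors from each satellite to the guess
    predicted = np.linalg.norm(diffs, axis=1)
    residuals = ranges - predicted    # measurement minus prediction
    J = diffs / predicted[:, None]    # Jacobian of the predicted ranges w.r.t. x
    dx = np.linalg.lstsq(J, residuals, rcond=None)[0]   # least-squares update
    x += dx
print("estimated receiver position (km):", np.round(x, 3))
```

A real receiver also solves for its clock bias as a fourth unknown, which is why at least four satellites are needed for a three-dimensional fix.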
As of October 2011, only the United States NAVSTAR Global Positioning System (GPS) and the Russian GLONASS are fully globally operational GNSSs. The European Union's Galileo positioning system is a next generation GNSS in the final deployment phase, and became operational in 2016. China has indicated it may expand its regional Beidou navigation system into a global system. More than two dozen GPS satellites are in medium Earth orbit, transmitting signals allowing GPS receivers to determine the receiver's location, speed and direction. Since the first experimental satellite was launched in 1978, GPS has become an indispensable aid to navigation around the world, and an important tool for map-making and land surveying. GPS also provides a precise time reference used in many applications including scientific study of earthquakes, and synchronization of telecommunications networks. Developed by the United States Department of Defense, GPS is officially named NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System). The satellite constellation is managed by the United States Air Force 50th Space Wing. The cost of maintaining the system is approximately US$750 million per year, including the replacement of aging satellites, and research and development. Despite this cost, GPS is free for civilian use as a public good. Modern smartphones act as personal GPS navigators for civilians who own them. Overuse of these devices, whether in the vehicle or on foot, can lead to a relative inability to learn about navigated environments, resulting in sub-optimal navigation abilities when and if these devices become unavailable. Typically a compass is also provided to determine direction when not moving. Acoustic navigation Navigation processes Ships and similar vessels One day's work in navigation The day's work in navigation is a minimal set of tasks consistent with prudent navigation. The definition will vary on military and civilian vessels, and from ship to ship, but the traditional method takes a form resembling: Maintain a continuous dead reckoning plot. Take two or more star observations at morning twilight for a celestial fix (prudent to observe six stars). Morning Sun observation. Can be taken on or near prime vertical for longitude, or at any time for a line of position. Determine compass error by azimuth observation of the Sun. Computation of the interval to noon, watch time of local apparent noon, and constants for meridian or ex-meridian sights. Noontime meridian or ex-meridian observation of the Sun for noon latitude line. Running fix or cross with Venus line for noon fix. Noontime determination of the day's run and day's set and drift. At least one afternoon Sun line, in case the stars are not visible at twilight. Determine compass error by azimuth observation of the Sun. Take two or more star observations at evening twilight for a celestial fix (prudent to observe six stars). Navigation on ships is almost always conducted on the bridge. It may also take place in an adjacent space, where chart tables and publications are available. Passage planning Passage planning or voyage planning is a procedure to develop a complete description of a vessel's voyage from start to finish. The plan includes leaving the dock and harbor area, the en route portion of a voyage, approaching the destination, and mooring. According to international law, a vessel's captain is legally responsible for passage planning; however, on larger vessels the task will be delegated to the ship's navigator. 
Studies show that human error is a factor in 80 percent of navigational accidents and that in many cases the human making the error had access to information that could have prevented the accident. The practice of voyage planning has evolved from penciling lines on nautical charts to a process of risk management. Passage planning consists of four stages: appraisal, planning, execution, and monitoring, which are specified in International Maritime Organization Resolution A.893(21), Guidelines For Voyage Planning, and these guidelines are reflected in the local laws of IMO signatory countries (for example, Title 33 of the U.S. Code of Federal Regulations), and a number of professional books or publications. There are some fifty elements of a comprehensive passage plan depending on the size and type of vessel. The appraisal stage deals with the collection of information relevant to the proposed voyage as well as ascertaining risks and assessing the key features of the voyage. This will involve considering the type of navigation required e.g. Ice navigation, the region the ship will be passing through and the hydrographic information on the route. In the next stage, the written plan is created. The third stage is the execution of the finalised voyage plan, taking into account any special circumstances which may arise such as changes in the weather, which may require the plan to be reviewed or altered. The final stage of passage planning consists of monitoring the vessel's progress in relation to the plan and responding to deviations and unforeseen circumstances. Integrated bridge systems Electronic integrated bridge concepts are driving future navigation system planning. Integrated systems take inputs from various ship sensors, electronically display positioning information, and provide control signals required to maintain a vessel on a preset course. The navigator becomes a system manager, choosing system presets, interpreting system output, and monitoring vessel response. Land navigation Navigation for cars and other land-based travel typically uses maps, landmarks, and in recent times computer navigation ("satnav", short for satellite navigation), as well as any means available on water. Computerized navigation commonly relies on GPS for current location information, a navigational map database of roads and navigable routes, and uses algorithms related to the shortest path problem to identify optimal routes. Pedestrian navigation is involved in orienteering, land navigation (military), and wayfinding. Underwater navigation Standards, training and organisations Professional standards for navigation depend on the type of navigation and vary by country. For marine navigation, Merchant Navy deck officers are trained and internationally certified according to the STCW Convention. Leisure and amateur mariners may undertake lessons in navigation at local/regional training schools. Naval officers receive navigation training as part of their naval training. In land navigation, courses and training is often provided to young persons as part of general or extra-curricular education. Land navigation is also an essential part of army training. Additionally, organisations such as the Scouts and DoE programme teach navigation to their students. Orienteering organisations are a type of sports that require navigational skills using a map and compass to navigate from point to point in diverse and usually unfamiliar terrain whilst moving at speed. 
In aviation, pilots undertake air navigation training as part of learning to fly. Professional organisations also work to encourage improvements in navigation and to bring together navigators in learned environments. The Royal Institute of Navigation (RIN) is a learned society with charitable status, aimed at furthering the development of navigation on land and sea, in the air and in space. It was founded in 1947 as a forum for mariners, pilots, engineers and academics to compare their experiences and exchange information. In the US, the Institute of Navigation (ION) is a non-profit professional organisation advancing the art and science of positioning, navigation and timing. Publications Numerous nautical publications are available on navigation, which are published by professional sources all over the world. In the UK, the United Kingdom Hydrographic Office, the Witherby Publishing Group and the Nautical Institute provide numerous navigational publications, including the comprehensive Admiralty Manual of Navigation. In the US, Bowditch's American Practical Navigator is a freely available encyclopedia of navigation issued by the US government. Navigation in spatial cognition Navigation is an essential everyday activity that involves a series of abilities that help humans and animals to locate, track, and follow paths in order to arrive at different destinations. Navigation, in spatial cognition, allows for acquiring information about the environment by using the body and landmarks of the environment as frames of reference to create mental representations of the environment, also known as a cognitive map. Humans navigate by transitioning between different spaces and coordinating both egocentric and allocentric frames of reference. Navigation can be distinguished into two spatial components: locomotion and wayfinding. Locomotion is the process of movement from one place to another, both in humans and in animals. Locomotion helps a navigator understand an environment by moving through a space in order to create a mental representation of it. Wayfinding is defined as an active process of following or deciding upon a path between one place and another through mental representations. It involves processes such as representation, planning and decision-making, which help to avoid obstacles, to stay on course, or to regulate pace when approaching particular objects. Navigation and wayfinding can be approached at the level of environmental space. According to Dan Montello’s space classification, there are four levels of space, with the third being environmental space. Environmental space represents a very large space, like a city, and can only be fully explored through movement, since not all objects and spaces are directly visible. Barbara Tversky also systematized space, this time taking into consideration the three dimensions that correspond to the axes of the human body and its extensions: above/below, front/back and left/right. Tversky ultimately proposed a fourfold classification of navigable space: space of the body, space around the body, space of navigation and space of graphics. Wayfinding There are two types of wayfinding in navigation: aided and unaided. Aided wayfinding requires a person to use various types of media, such as maps, GPS, directional signage, etc., in their navigation process, which generally involves low spatial reasoning and is less cognitively demanding. Unaided wayfinding involves no such devices for the person who is navigating. 
Unaided wayfinding can be subdivided into a taxonomy of tasks depending on whether it is undirected or directed, which basically makes the distinction of whether there is a precise destination or not: undirected wayfinding means that a person is simply exploring an environment for pleasure without any set destination. Directed wayfinding, instead, can be further subdivided into search vs. target approximation. Search means that a person does not know where the destination is located and must find it either in an unfamiliar environment, which is labeled as an uninformed search, or in a familiar environment, labeled as an informed search. In target approximation, on the other hand, the location of the destination is known to the navigator but a further distinction is made based on whether the navigator knows how to arrive or not to the destination. Path following means that the environment, the path, and the destination are all known which means that the navigator simply follows the path they already know and arrive at the destination without much thought. For example, when you are in your city and walking on the same path as you normally take from your house to your job or university. However, path finding means that the navigator knows where the destination is but does not know the route they have to take to arrive at the destination: you know where a specific store is but you do not know how to arrive there or what path to take. If the navigator does not know the environment, it is called path search which means that only the destination is known while neither the path nor the environment is: you are in a new city and need to arrive at the train station but do not know how to get there. Path planning, on the other hand, means that the navigator knows both where the destination is and is familiar with the environment so they only need to plan the route or path that they should take to arrive at their target. For example, if you are in your city and need to get to a specific store that you know the destination of but do not know the specific path you need to take to get there.
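The taxonomy of unaided wayfinding described above can be summarised as a small decision procedure over what the navigator knows. The sketch below simply encodes that classification as an interpretative aid; the exact wording of the categories follows the paragraph above rather than any cited source.

```python
def classify_wayfinding(has_destination, knows_destination_location=False,
                        knows_path=False, knows_environment=False):
    """Classify an unaided wayfinding task following the taxonomy described above."""
    if not has_destination:
        return "undirected wayfinding (exploring for its own sake)"
    if not knows_destination_location:
        # Directed wayfinding as search: the destination still has to be found.
        return ("informed search (familiar environment)" if knows_environment
                else "uninformed search (unfamiliar environment)")
    # Target approximation: the destination's location is known.
    if knows_path:
        return "path following (environment, path and destination all known)"
    if knows_environment:
        return "path planning (known environment, route still to be planned)"
    return "path search (only the destination is known)"

# Example: new in town and heading for the train station, with neither the
# path nor the environment known.
print(classify_wayfinding(True, knows_destination_location=True))
```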
Technology
Navigation and timekeeping
null
21865
https://en.wikipedia.org/wiki/Neurotransmitter
Neurotransmitter
A neurotransmitter is a signaling molecule secreted by a neuron to affect another cell across a synapse. The cell receiving the signal, or target cell, may be another neuron, but could also be a gland or muscle cell. Neurotransmitters are released from synaptic vesicles into the synaptic cleft where they are able to interact with neurotransmitter receptors on the target cell. Some neurotransmitters are also stored in large dense core vesicles. The neurotransmitter's effect on the target cell is determined by the receptor it binds to. Many neurotransmitters are synthesized from simple and plentiful precursors such as amino acids, which are readily available and often require a small number of biosynthetic steps for conversion. Neurotransmitters are essential to the function of complex neural systems. The exact number of unique neurotransmitters in humans is unknown, but more than 100 have been identified. Common neurotransmitters include glutamate, GABA, acetylcholine, glycine, dopamine and norepinephrine. Mechanism and cycle Synthesis Neurotransmitters are generally synthesized in neurons and are made up of, or derived from, precursor molecules that are found abundantly in the cell. Classes of neurotransmitters include amino acids, monoamines, and peptides. Monoamines are synthesized by altering a single amino acid. For example, the precursor of serotonin is the amino acid tryptophan. Peptide neurotransmitters, or neuropeptides, are protein transmitters which are larger than the classical small-molecule neurotransmitters and are often released together to elicit a modulatory effect. Purine neurotransmitters, like ATP, are derived from nucleic acids. Metabolic products such as nitric oxide and carbon monoxide have also been reported to act like neurotransmitters. Storage Neurotransmitters are generally stored in synaptic vesicles, clustered close to the cell membrane at the axon terminal of the presynaptic neuron. However, some neurotransmitters, like the metabolic gases carbon monoxide and nitric oxide, are synthesized and released immediately following an action potential without ever being stored in vesicles. Release Generally, a neurotransmitter is released via exocytosis at the presynaptic terminal in response to an electrical signal called an action potential in the presynaptic neuron. However, low-level "baseline" release also occurs without electrical stimulation. Neurotransmitters are released into and diffuse across the synaptic cleft, where they bind to specific receptors on the membrane of the postsynaptic neuron. Receptor interaction After being released into the synaptic cleft, neurotransmitters diffuse across the synapse where they are able to interact with receptors on the target cell. The effect of the neurotransmitter is dependent on the identity of the target cell's receptors present at the synapse. Depending on the receptor, binding of neurotransmitters may cause excitation, inhibition, or modulation of the postsynaptic neuron. Elimination In order to avoid continuous activation of receptors on the post-synaptic or target cell, neurotransmitters must be removed from the synaptic cleft. Neurotransmitters are removed through one of three mechanisms: Diffusion – neurotransmitters drift out of the synaptic cleft, where they are absorbed by glial cells. These glial cells, usually astrocytes, absorb the excess neurotransmitters. Astrocytes, a type of glial cell in the brain, actively contribute to synaptic communication through astrocytic diffusion or gliotransmission. 
Neuronal activity triggers an increase in astrocytic calcium levels, prompting the release of gliotransmitters, such as glutamate, ATP, and D-serine. These gliotransmitters diffuse into the extracellular space, interacting with nearby neurons and influencing synaptic transmission. By regulating extracellular neurotransmitter levels, astrocytes help maintain proper synaptic function. This bidirectional communication between astrocytes and neurons adds complexity to brain signaling, with implications for brain function and neurological disorders. Enzyme degradation – proteins called enzymes break the neurotransmitters down. Reuptake – neurotransmitters are reabsorbed into the pre-synaptic neuron. Transporters, or membrane transport proteins, pump neurotransmitters from the synaptic cleft back into axon terminals (the presynaptic neuron) where they are stored for reuse. For example, acetylcholine is eliminated by having its acetyl group cleaved by the enzyme acetylcholinesterase; the remaining choline is then taken in and recycled by the pre-synaptic neuron to synthesize more acetylcholine. Other neurotransmitters are able to diffuse away from their targeted synaptic junctions and are eliminated from the body via the kidneys, or destroyed in the liver. Each neurotransmitter has very specific degradation pathways at regulatory points, which may be targeted by the body's regulatory system or medication. Cocaine blocks a dopamine transporter responsible for the reuptake of dopamine. Without the transporter, dopamine diffuses much more slowly from the synaptic cleft and continues to activate the dopamine receptors on the target cell.
Discovery
Until the early 20th century, scientists assumed that the majority of synaptic communication in the brain was electrical. However, through histological examinations by Ramón y Cajal, a 20 to 40 nm gap between neurons, known today as the synaptic cleft, was discovered. The presence of such a gap suggested communication via chemical messengers traversing the synaptic cleft, and in 1921 German pharmacologist Otto Loewi confirmed that neurons can communicate by releasing chemicals. In a series of experiments involving the vagus nerves of frogs, Loewi slowed the heart rate of one frog heart by stimulating its vagus nerve and then transferred the surrounding saline solution to a second heart, which slowed as well. Upon completion of this experiment, Loewi asserted that parasympathetic (vagal) regulation of cardiac function can be mediated through changes in chemical concentrations. Furthermore, Otto Loewi is credited with discovering acetylcholine (ACh) – the first known neurotransmitter.
Identification
To identify neurotransmitters, the following criteria are typically considered:
Synthesis: The chemical must be produced within the neuron or be present in it as a precursor molecule.
Release and response: When the neuron is activated, the chemical must be released and elicit a response in target cells or neurons.
Experimental response: Application of the chemical directly to the target cells should produce the same response observed when the chemical is naturally released from neurons.
Removal mechanism: There must be a mechanism in place to remove the neurotransmitter from its site of action once its signaling role is complete.
However, given advances in pharmacology, genetics, and chemical neuroanatomy, the term "neurotransmitter" can be applied to chemicals that:
Carry messages between neurons via influence on the postsynaptic membrane.
Have little or no direct effect on membrane voltage, but instead have a common carrier function, such as changing the structure of the synapse.
Communicate by sending reverse-direction messages that affect the release or reuptake of transmitters.
The anatomical localization of neurotransmitters is typically determined using immunocytochemical techniques, which identify the location of either the transmitter substances themselves or of the enzymes that are involved in their synthesis. Immunocytochemical techniques have also revealed that many transmitters, particularly the neuropeptides, are co-localized, that is, a neuron may release more than one transmitter from its synaptic terminal. Various techniques and experiments such as staining, stimulating, and collecting can be used to identify neurotransmitters throughout the central nervous system.
Actions
Neurons communicate with each other through synapses, specialized contact points where neurotransmitters transmit signals. When an action potential reaches the presynaptic terminal, it can trigger the release of neurotransmitters into the synaptic cleft. These neurotransmitters then bind to receptors on the postsynaptic membrane, influencing the receiving neuron in either an inhibitory or excitatory manner. If the overall excitatory influences outweigh the inhibitory influences, the receiving neuron may generate its own action potential, continuing the transmission of information to the next neuron in the network. This process allows for the flow of information and the formation of complex neural networks.
Modulation
A neurotransmitter may have an excitatory, inhibitory or modulatory effect on the target cell. The effect is determined by the receptors the neurotransmitter interacts with at the post-synaptic membrane. A neurotransmitter influences trans-membrane ion flow either to increase (excitatory) or to decrease (inhibitory) the probability that the cell with which it comes in contact will produce an action potential. Synapses containing receptors with excitatory effects are called Type I synapses, while Type II synapses contain receptors with inhibitory effects. Thus, despite the wide variety of synapses, they all convey messages of only these two types. The two types differ in appearance and are primarily located on different parts of the neuron they influence. Receptors with modulatory effects are spread throughout all synaptic membranes, and binding of neurotransmitters to them sets in motion signaling cascades that help the cell regulate its function. Binding of neurotransmitters to receptors with modulatory effects can have many results. For example, it may result in an increase or decrease in sensitivity to future stimuli by recruiting more or fewer receptors to the synaptic membrane. Type I (excitatory) synapses are typically located on the shafts or the spines of dendrites, whereas type II (inhibitory) synapses are typically located on a cell body. In addition, Type I synapses have round synaptic vesicles, whereas the vesicles of type II synapses are flattened. The material on the presynaptic and post-synaptic membranes is denser in a Type I synapse than it is in a Type II, and the Type I synaptic cleft is wider. Finally, the active zone on a Type I synapse is larger than that on a Type II synapse. The different locations of Type I and Type II synapses divide a neuron into two zones: an excitatory dendritic tree and an inhibitory cell body.
From an inhibitory perspective, excitation comes in over the dendrites and spreads to the axon hillock to trigger an action potential. If the message is to be stopped, it is best stopped by applying inhibition on the cell body, close to the axon hillock where the action potential originates. Another way to conceptualize excitatory–inhibitory interaction is to picture excitation overcoming inhibition. If the cell body is normally in an inhibited state, the only way to generate an action potential at the axon hillock is to reduce the cell body's inhibition. In this "open the gates" strategy, the excitatory message is like a racehorse ready to run down the track, but first, the inhibitory starting gate must be removed.
Neurotransmitter actions
As explained above, the only direct action of a neurotransmitter is to activate a receptor. Therefore, the effects of a neurotransmitter system depend on the connections of the neurons that use the transmitter, and the chemical properties of the receptors.
Glutamate is used at the great majority of fast excitatory synapses in the brain and spinal cord. It is also used at most synapses that are "modifiable", i.e. capable of increasing or decreasing in strength. Modifiable synapses are thought to be the main memory-storage elements in the brain. Excessive glutamate release can overstimulate the brain and lead to excitotoxicity, causing cell death that can result in seizures or strokes. Excitotoxicity has been implicated in certain chronic diseases including ischemic stroke, epilepsy, amyotrophic lateral sclerosis, Alzheimer's disease, Huntington disease, and Parkinson's disease.
GABA is used at the great majority of fast inhibitory synapses in virtually every part of the brain. Many sedative/tranquilizing drugs act by enhancing the effects of GABA.
Glycine is the primary inhibitory neurotransmitter in the spinal cord.
Acetylcholine was the first neurotransmitter discovered in the peripheral and central nervous systems. It activates skeletal muscles in the somatic nervous system and may either excite or inhibit internal organs in the autonomic system. It is the main neurotransmitter at the neuromuscular junction connecting motor nerves to muscles. The paralytic arrow-poison curare acts by blocking transmission at these synapses. Acetylcholine also operates in many regions of the brain as a neuromodulator, acting through different types of receptors, including nicotinic and muscarinic receptors.
Dopamine has a number of important functions in the brain. These include a critical role in the reward system, motivation, and emotional arousal. It also plays an important role in fine motor control, and Parkinson's disease has been linked to low levels of dopamine due to the loss of dopaminergic neurons in the substantia nigra pars compacta. Schizophrenia, a highly heterogeneous and complicated disorder, has been linked to high levels of dopamine.
Serotonin is a monoamine neurotransmitter. Most of it is produced by the intestine (approximately 90%), and the remainder by central nervous system neurons at the raphe nuclei. It functions to regulate appetite, sleep, memory and learning, temperature, mood, behaviour, muscle contraction, and the functions of the cardiovascular system and endocrine system. It is speculated to have a role in depression, as some depressed patients have been reported to exhibit lower concentrations of metabolites of serotonin in their cerebrospinal fluid and brain tissue.
Norepinephrine is a member of the catecholamine family of neurotransmitters.
It is synthesized from the amino acid tyrosine. In the peripheral nervous system, one of the primary roles of norepinephrine is to stimulate the release of the stress hormone epinephrine (i.e. adrenaline) from the adrenal glands. Norepinephrine is involved in the fight-or-flight response and is also affected in anxiety disorders and depression.
Epinephrine, a neurotransmitter and hormone, is synthesized from tyrosine. It is released from the adrenal glands and also plays a role in the fight-or-flight response. Epinephrine has vasoconstrictive effects, which promote increased heart rate, blood pressure, and energy mobilization. Vasoconstriction influences metabolism by promoting the breakdown of glucose released into the bloodstream. Epinephrine also has a bronchodilatory effect, relaxing the airways.
Types
There are many different ways to classify neurotransmitters; they are commonly grouped into amino acids, monoamines, and peptides. Some of the major neurotransmitters are:
Amino acids: glutamate, aspartate, D-serine, gamma-Aminobutyric acid (GABA), glycine
Gasotransmitters: nitric oxide (NO), carbon monoxide (CO), hydrogen sulfide (H2S)
Monoamines:
Catecholamines: dopamine (DA), norepinephrine (noradrenaline, NE), epinephrine (adrenaline)
Indolamines: serotonin (5-HT, SER), melatonin
histamine
Trace amines: phenethylamine, N-methylphenethylamine, tyramine, 3-iodothyronamine, octopamine, tryptamine, etc.
Peptides: oxytocin, somatostatin, substance P, cocaine- and amphetamine-regulated transcript, opioid peptides
Purines: adenosine triphosphate (ATP), adenosine
Others: acetylcholine (ACh), anandamide, etc.
In addition, over 100 neuroactive peptides have been found, and new ones are discovered regularly. Many of these are co-released along with a small-molecule transmitter. Nevertheless, in some cases, a peptide is the primary transmitter at a synapse. Beta-Endorphin is a relatively well-known example of a peptide neurotransmitter because it engages in highly specific interactions with opioid receptors in the central nervous system. Single ions (such as synaptically released zinc) are also considered neurotransmitters by some, as well as some gaseous molecules such as nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S). The gases are produced in the neural cytoplasm and immediately diffuse through the cell membrane into the extracellular fluid and into nearby cells, where they stimulate production of second messengers. Soluble gas neurotransmitters are difficult to study, as they act rapidly and are immediately broken down, existing for only a few seconds. The most prevalent transmitter is glutamate, which is excitatory at well over 90% of the synapses in the human brain. The next most prevalent is gamma-Aminobutyric Acid, or GABA, which is inhibitory at more than 90% of the synapses that do not use glutamate. Although other transmitters are used in fewer synapses, they may be very important functionally: the great majority of psychoactive drugs exert their effects by altering the actions of some neurotransmitter systems, often acting through transmitters other than glutamate or GABA. Addictive drugs such as cocaine and amphetamines exert their effects primarily on the dopamine system. The addictive opiate drugs exert their effects primarily as functional analogs of opioid peptides, which, in turn, regulate dopamine levels.
List of neurotransmitters, peptides, and gaseous signaling molecules
Neurotransmitter systems
Neurons expressing certain types of neurotransmitters sometimes form distinct systems, where activation of the system affects large volumes of the brain, called volume transmission. Major neurotransmitter systems include the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system, among others. Trace amines have a modulatory effect on neurotransmission in monoamine pathways (i.e., dopamine, norepinephrine, and serotonin pathways) throughout the brain via signaling through trace amine-associated receptor 1.
Drug effects
Understanding the effects of drugs on neurotransmitters comprises a significant portion of research initiatives in the field of neuroscience. Most neuroscientists involved in this field of research believe that such efforts may further advance our understanding of the circuits responsible for various neurological diseases and disorders, as well as ways to effectively treat and someday possibly prevent or cure such illnesses.
Drugs can influence behavior by altering neurotransmitter activity. For instance, drugs can decrease the rate of synthesis of neurotransmitters by affecting the synthetic enzyme(s) for that neurotransmitter. When neurotransmitter synthesis is blocked, the amount of neurotransmitter available for release becomes substantially lower, resulting in a decrease in neurotransmitter activity. Some drugs block or stimulate the release of specific neurotransmitters. Alternatively, drugs can prevent neurotransmitter storage in synaptic vesicles by causing the synaptic vesicle membranes to leak. Drugs that prevent a neurotransmitter from binding to its receptor are called receptor antagonists. For example, drugs used to treat patients with schizophrenia, such as haloperidol, chlorpromazine, and clozapine, are antagonists at receptors in the brain for dopamine. Other drugs act by binding to a receptor and mimicking the normal neurotransmitter. Such drugs are called receptor agonists. An example of a receptor agonist is morphine, an opiate that mimics effects of the endogenous neurotransmitter β-endorphin to relieve pain. Other drugs interfere with the deactivation of a neurotransmitter after it has been released, thereby prolonging the action of a neurotransmitter. This can be accomplished by blocking re-uptake or inhibiting degradative enzymes. Lastly, drugs can also prevent an action potential from occurring, blocking neuronal activity throughout the central and peripheral nervous system. Drugs such as tetrodotoxin that block neural activity are typically lethal.
Drugs targeting the neurotransmitters of major systems affect the whole system, which can explain the complexity of action of some drugs. Cocaine, for example, blocks the re-uptake of dopamine back into the presynaptic neuron, leaving the neurotransmitter molecules in the synaptic gap for an extended period of time. Since the dopamine remains in the synapse longer, the neurotransmitter continues to bind to the receptors on the postsynaptic neuron, eliciting a pleasurable emotional response. Physical addiction to cocaine may result from prolonged exposure to excess dopamine in the synapses, which leads to the downregulation of some post-synaptic receptors. After the effects of the drug wear off, an individual can become depressed due to decreased probability of the neurotransmitter binding to a receptor.
Fluoxetine is a selective serotonin re-uptake inhibitor (SSRI) that blocks re-uptake of serotonin by the presynaptic cell. This increases the amount of serotonin present at the synapse and allows it to remain there longer, potentiating the effect of naturally released serotonin. AMPT prevents the conversion of tyrosine to L-DOPA, the precursor to dopamine; reserpine prevents dopamine storage within vesicles; and deprenyl inhibits monoamine oxidase (MAO)-B and thus increases dopamine levels.
Agonists
An agonist is a chemical capable of binding to a receptor, such as a neurotransmitter receptor, and initiating the same reaction typically produced by the binding of the endogenous substance. An agonist of a neurotransmitter will thus initiate the same receptor response as the transmitter. In neurons, an agonist drug may activate neurotransmitter receptors either directly or indirectly. Direct-binding agonists can be further characterized as full agonists, partial agonists, or inverse agonists.
Direct agonists act similarly to a neurotransmitter by binding directly to its associated receptor site(s), which may be located on the presynaptic neuron or postsynaptic neuron, or both. Typically, neurotransmitter receptors are located on the postsynaptic neuron, while neurotransmitter autoreceptors are located on the presynaptic neuron, as is the case for monoamine neurotransmitters; in some cases, a neurotransmitter utilizes retrograde neurotransmission, a type of feedback signaling in neurons where the neurotransmitter is released postsynaptically and binds to target receptors located on the presynaptic neuron. Nicotine, a compound found in tobacco, is a direct agonist of most nicotinic acetylcholine receptors, mainly located in cholinergic neurons. Opiates, such as morphine, heroin, hydrocodone, oxycodone, codeine, and methadone, are μ-opioid receptor agonists; this action mediates their euphoriant and pain relieving properties.
Indirect agonists increase the binding of neurotransmitters at their target receptors by stimulating the release or preventing the reuptake of neurotransmitters. Some indirect agonists both trigger neurotransmitter release and prevent neurotransmitter reuptake. Amphetamine, for example, is an indirect agonist of postsynaptic dopamine, norepinephrine, and serotonin receptors in their respective neurons; it both induces the release of these neurotransmitters from the presynaptic neuron into the synaptic cleft and prevents their reuptake from the synaptic cleft, by activating TAAR1, a presynaptic G protein-coupled receptor, and by binding to a site on VMAT2, a type of monoamine transporter located on synaptic vesicles within monoamine neurons.
Antagonists
An antagonist is a chemical that acts within the body to reduce the physiological activity of another chemical substance (such as an opiate); especially one that opposes the action on the nervous system of a drug or a substance occurring naturally in the body by combining with and blocking its nervous receptor. There are two main types of antagonist: direct-acting antagonists and indirect-acting antagonists.
Direct-acting antagonist – occupies binding sites on receptors that would otherwise be taken up by neurotransmitters themselves, blocking the neurotransmitters from binding to the receptors. A common example is atropine.
Indirect-acting antagonist – a drug that inhibits the release or production of neurotransmitters (e.g., reserpine).
Drug antagonists
An antagonist drug is one that attaches (or binds) to a site called a receptor without activating that receptor to produce a biological response. It is therefore said to have no intrinsic activity. An antagonist may also be called a receptor "blocker" because it blocks the effect of an agonist at the site. The pharmacological effects of an antagonist, therefore, result in preventing the corresponding receptor site's agonists (e.g., drugs, hormones, neurotransmitters) from binding to and activating it. Antagonists may be "competitive" or "irreversible".
A competitive antagonist competes with an agonist for binding to the receptor. As the concentration of antagonist increases, the binding of the agonist is progressively inhibited, resulting in a decrease in the physiological response. A high concentration of an antagonist can completely inhibit the response. This inhibition can be reversed, however, by an increase of the concentration of the agonist, since the agonist and antagonist compete for binding to the receptor. Competitive antagonists, therefore, can be characterized as shifting the dose–response relationship for the agonist to the right. In the presence of a competitive antagonist, it takes an increased concentration of the agonist to produce the same response observed in the absence of the antagonist (a numerical sketch of this rightward shift is given at the end of this section).
An irreversible antagonist binds so strongly to the receptor as to render the receptor unavailable for binding to the agonist. Irreversible antagonists may even form covalent chemical bonds with the receptor. In either case, if the concentration of the irreversible antagonist is high enough, the number of unbound receptors remaining for agonist binding may be so low that even high concentrations of the agonist do not produce the maximum biological response.
Precursors
While intake of neurotransmitter precursors does increase neurotransmitter synthesis, evidence is mixed as to whether neurotransmitter release and postsynaptic receptor firing are increased. Even with increased neurotransmitter release, it is unclear whether this will result in a long-term increase in neurotransmitter signal strength, since the nervous system can adapt to changes such as increased neurotransmitter synthesis and may therefore maintain constant firing. Some neurotransmitters may have a role in depression and there is some evidence to suggest that intake of precursors of these neurotransmitters may be useful in the treatment of mild and moderate depression.
Catecholamine and trace amine precursors
L-DOPA, a precursor of dopamine that crosses the blood–brain barrier, is used in the treatment of Parkinson's disease. For depressed patients where low activity of the neurotransmitter norepinephrine is implicated, there is only limited evidence for benefit of neurotransmitter precursor administration. L-phenylalanine and L-tyrosine are both precursors for dopamine, norepinephrine, and epinephrine. These conversions require vitamin B6, vitamin C, and S-adenosylmethionine. A few studies suggest potential antidepressant effects of L-phenylalanine and L-tyrosine, but there is much room for further research in this area.
Serotonin precursors
Administration of L-tryptophan, a precursor for serotonin, has been seen to double the production of serotonin in the brain. It is significantly more effective than a placebo in the treatment of mild and moderate depression. This conversion requires vitamin C. 5-hydroxytryptophan (5-HTP), also a precursor for serotonin, is more effective than a placebo.
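The rightward shift produced by a competitive antagonist, described under "Drug antagonists" above, can be made concrete with the classical Gaddum relation for receptor occupancy, in which an agonist competing with an antagonist B behaves as if its dissociation constant were multiplied by (1 + [B]/K_B). The following minimal sketch treats occupancy as a crude stand-in for response; the concentrations and dissociation constants are arbitrary illustrative values, not data for any particular drug.

import numpy as np

def fractional_response(agonist, K_A, antagonist=0.0, K_B=1.0):
    # Gaddum equation: fractional receptor occupancy by an agonist in the
    # presence of a competitive antagonist (occupancy used as a proxy for response).
    return agonist / (agonist + K_A * (1.0 + antagonist / K_B))

K_A = 1e-7                                # agonist dissociation constant, mol/L (illustrative)
K_B = 1e-8                                # antagonist dissociation constant, mol/L (illustrative)
agonist_conc = np.logspace(-10, -4, 7)    # agonist concentrations, mol/L

for B in (0.0, 1e-8, 1e-7):               # increasing antagonist concentrations, mol/L
    dose_ratio = 1.0 + B / K_B            # fold increase in apparent EC50 ("dose ratio")
    response = fractional_response(agonist_conc, K_A, B, K_B)
    print(f"[antagonist] = {B:.0e} M, dose ratio = {dose_ratio:.0f}x")
    print("   response:", np.round(response, 3))

With these inputs the curves keep the same maximum but require roughly 2x and 11x more agonist to reach it as the antagonist concentration rises, which is the rightward shift described above; raising the agonist concentration further restores the full response, in contrast to the irreversible case.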
Diseases and disorders
Diseases and disorders may also affect specific neurotransmitter systems. The following are disorders involved in either an increase, decrease, or imbalance of certain neurotransmitters.
Dopamine
For example, problems in producing dopamine (mainly in the substantia nigra) can result in Parkinson's disease, a disorder that affects a person's ability to move as they want to, resulting in stiffness, tremors or shaking, and other symptoms. Some studies suggest that having too little or too much dopamine, or problems using dopamine in the thinking and feeling regions of the brain, may play a role in disorders like schizophrenia or attention deficit hyperactivity disorder (ADHD). Dopamine is also involved in addiction and drug use, as most recreational drugs cause an influx of dopamine in the brain (especially opioids and methamphetamines) that produces a pleasurable feeling, which is why users constantly crave drugs.
Serotonin
Similarly, after some research suggested that drugs that block the recycling, or reuptake, of serotonin seemed to help some people diagnosed with depression, it was theorized that people with depression might have lower-than-normal serotonin levels. Though widely popularized, this theory was not borne out in subsequent research. Nonetheless, selective serotonin reuptake inhibitors (SSRIs) continue to be used to increase the amounts of serotonin in synapses.
Glutamate
Furthermore, problems with producing or using glutamate have been suggestively and tentatively linked to many mental disorders, including autism, obsessive–compulsive disorder (OCD), schizophrenia, and depression. Having too much glutamate has been linked to neurological diseases such as Parkinson's disease, multiple sclerosis, Alzheimer's disease, stroke, and ALS (amyotrophic lateral sclerosis).
Neurotransmitter imbalance
Generally, there are no scientifically established "norms" for appropriate levels or "balances" of different neurotransmitters. In most cases, it is practically impossible to measure neurotransmitter levels in the brain or body at any given moment. Neurotransmitters regulate each other's release, and weak but consistent imbalances in this mutual regulation have been linked to temperament in healthy people. However, significant imbalances or disruptions in neurotransmitter systems are associated with various diseases and mental disorders, including Parkinson's disease, depression, insomnia, attention deficit hyperactivity disorder (ADHD), anxiety, memory loss, dramatic weight changes, and addictions. Some of these conditions are also related to neurotransmitter switching, a phenomenon where neurons change the type of neurotransmitters they release. Chronic physical or emotional stress can be a contributor to neurotransmitter system changes. Genetics also plays a role in neurotransmitter activities. Apart from recreational use, medications that directly and indirectly interact with one or more transmitters or their receptors are commonly prescribed for psychiatric and psychological issues. Notably, drugs interacting with serotonin and norepinephrine are prescribed to patients with problems such as depression and anxiety, though the notion that there is much solid medical evidence to support such interventions has been widely criticized. Studies have shown that dopamine imbalance has an influence on multiple sclerosis and other neurological disorders.
Biology and health sciences
Neurotransmitters
Biology
21869
https://en.wikipedia.org/wiki/Neutron%20star
Neutron star
A neutron star is the collapsed core of a massive supergiant star. It results from the supernova explosion of a massive star—combined with gravitational collapse—that compresses the core past white dwarf star density to that of atomic nuclei. Surpassed only by black holes, neutron stars are the second smallest and densest known class of stellar objects. Neutron stars have a radius on the order of and a mass of about . Stars that collapse into neutron stars have a total mass of between 10 and 25 solar masses (), or possibly more for those that are especially rich in elements heavier than hydrogen and helium. Once formed, neutron stars no longer actively generate heat and cool over time, but they may still evolve further through collisions or accretion. Most of the basic models for these objects imply that they are composed almost entirely of neutrons, as the extreme pressure causes the electrons and protons present in normal matter to combine into additional neutrons. These stars are partially supported against further collapse by neutron degeneracy pressure, just as white dwarfs are supported against collapse by electron degeneracy pressure. However, this is not by itself sufficient to hold up an object beyond and repulsive nuclear forces increasingly contribute to supporting more massive neutron stars. If the remnant star has a mass exceeding the Tolman–Oppenheimer–Volkoff limit, which ranges from the combination of degeneracy pressure and nuclear forces is insufficient to support the neutron star, causing it to collapse and form a black hole. The most massive neutron star detected so far, PSR J0952–0607, is estimated to be . Newly formed neutron stars may have surface temperatures of ten million K or more. However, since neutron stars generate no new heat through fusion, they inexorably cool down after their formation. Consequently, a given neutron star reaches a surface temperature of one million K when it is between one thousand and one million years old. Older and even-cooler neutron stars are still easy to discover. For example, the well-studied neutron star, has an average surface temperature of about 434,000 K. For comparison, the Sun has an effective surface temperature of 5,780 K. Neutron star material is remarkably dense: a normal-sized matchbox containing neutron-star material would have a weight of approximately 3 billion tonnes, the same weight as a 0.5-cubic-kilometer chunk of the Earth (a cube with edges of about 800 meters) from Earth's surface. As a star's core collapses, its rotation rate increases due to conservation of angular momentum, so newly formed neutron stars typically rotate at up to several hundred times per second. Some neutron stars emit beams of electromagnetic radiation that make them detectable as pulsars, and the discovery of pulsars by Jocelyn Bell Burnell and Antony Hewish in 1967 was the first observational suggestion that neutron stars exist. The fastest-spinning neutron star known is PSR J1748−2446ad, rotating at a rate of 716 times per second or 43,000 revolutions per minute, giving a linear (tangential) speed at the surface on the order of 0.24c (i.e., nearly a quarter the speed of light). There are thought to be around one billion neutron stars in the Milky Way, and at a minimum several hundred million, a figure obtained by estimating the number of stars that have undergone supernova explosions. However, many of them have existed for a long period of time and have cooled down considerably. 
These stars radiate very little electromagnetic radiation; most neutron stars that have been detected occur only in certain situations in which they do radiate, such as if they are a pulsar or a part of a binary system. Slow-rotating and non-accreting neutron stars are difficult to detect, due to the absence of electromagnetic radiation; however, since the Hubble Space Telescope's detection of RX J1856.5−3754 in the 1990s, a few nearby neutron stars that appear to emit only thermal radiation have been detected. Neutron stars in binary systems can undergo accretion, in which case they emit large amounts of X-rays. During this process, matter is deposited on the surface of the stars, forming "hotspots" that can be sporadically identified as X-ray pulsar systems. Additionally, such accretions are able to "recycle" old pulsars, causing them to gain mass and rotate extremely quickly, forming millisecond pulsars. Furthermore, binary systems such as these continue to evolve, with many companions eventually becoming compact objects such as white dwarfs or neutron stars themselves, though other possibilities include a complete destruction of the companion through ablation or collision. The study of neutron star systems is central to gravitational wave astronomy. The merger of binary neutron stars produces gravitational waves and may be associated with kilonovae and short-duration gamma-ray bursts. In 2017, the LIGO and Virgo interferometer sites observed GW170817, the first direct detection of gravitational waves from such an event. Prior to this, indirect evidence for gravitational waves was inferred by studying the gravity radiated from the orbital decay of a different type of (unmerged) binary neutron system, the Hulse–Taylor pulsar. Formation Any main-sequence star with an initial mass of greater than (eight times the mass of the Sun) has the potential to become a neutron star. As the star evolves away from the main sequence, stellar nucleosynthesis produces an iron-rich core. When all nuclear fuel in the core has been exhausted, the core must be supported by degeneracy pressure alone. Further deposits of mass from shell burning cause the core to exceed the Chandrasekhar limit. Electron-degeneracy pressure is overcome, and the core collapses further, causing temperatures to rise to over (5 billion K). At these temperatures, photodisintegration (the breakdown of iron nuclei into alpha particles due to high-energy gamma rays) occurs. As the temperature of the core continues to rise, electrons and protons combine to form neutrons via electron capture, releasing a flood of neutrinos. When densities reach a nuclear density of , a combination of strong force repulsion and neutron degeneracy pressure halts the contraction. The contracting outer envelope of the star is halted and rapidly flung outwards by a flux of neutrinos produced in the creation of the neutrons, resulting in a supernova and leaving behind a neutron star. However, if the remnant has a mass greater than about , it instead becomes a black hole. As the core of a massive star is compressed during a Type II supernova or a Type Ib or Type Ic supernova, and collapses into a neutron star, it retains most of its angular momentum. Because it has only a tiny fraction of its parent's radius (sharply reducing its moment of inertia), a neutron star is formed with very high rotation speed and then, over a very long period, it slows. Neutron stars are known that have rotation periods from about 1.4 ms to 30 s. 
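The spin-up during collapse described above follows directly from conservation of angular momentum: treating the core as a uniform sphere of fixed mass, the moment of inertia scales with the square of the radius, so the angular velocity grows roughly as (R_before / R_after)^2 as the core shrinks. A back-of-the-envelope sketch, in which the pre-collapse core radius and rotation period are purely illustrative assumptions:

# Spin-up of a collapsing stellar core from conservation of angular momentum.
# For a uniform sphere of constant mass, I ~ M * R**2, so
# omega_after = omega_before * (R_before / R_after)**2.
# The input numbers below are illustrative assumptions, not measured values.
R_before = 6.0e6     # pre-collapse core radius, m (a few thousand km, assumed)
R_after = 1.2e4      # neutron star radius, m (about 12 km)
P_before = 1.0e4     # pre-collapse core rotation period, s (a few hours, assumed)

spin_up = (R_before / R_after) ** 2
P_after = P_before / spin_up
print(f"spin-up factor ~ {spin_up:.0f}")
print(f"final period ~ {P_after:.3f} s ({1.0 / P_after:.0f} rotations per second)")

With these assumed inputs the newly formed star spins a few tens of times per second, consistent with the statement above that newborn neutron stars typically rotate many times, and up to several hundred times, per second.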
The neutron star's density also gives it very high surface gravity, with typical values ranging from to (more than times that of Earth). One measure of such immense gravity is the fact that neutron stars have an escape velocity of over half the speed of light. The neutron star's gravity accelerates infalling matter to tremendous speed, and tidal forces near the surface can cause spaghettification.
Properties
Equation of state
The equation of state of neutron stars is not currently known. This is because neutron stars are the second densest known class of objects in the universe, less dense only than black holes. The extreme density means there is no way to replicate the material in laboratories on Earth, which is how equations of state for other systems, such as ideal gases, are tested. The closest neutron star is many parsecs away, meaning there is no feasible way to study it directly. While it is known that neutron stars should be similar to a degenerate gas, they cannot be modeled strictly as one (as white dwarfs are) because of the extreme gravity. General relativity must be considered for the neutron star equation of state because Newtonian gravity is no longer sufficient in those conditions. Effects such as quantum chromodynamics (QCD), superconductivity, and superfluidity must also be considered.
At the extraordinarily high densities of neutron stars, ordinary matter is squeezed to nuclear densities. Specifically, the matter ranges from nuclei embedded in a sea of electrons at low densities in the outer crust, to increasingly neutron-rich structures in the inner crust, to the extremely neutron-rich uniform matter in the outer core, and possibly exotic states of matter at high densities in the inner core. Understanding the nature of the matter present in the various layers of neutron stars, and the phase transitions that occur at the boundaries of the layers, is a major unsolved problem in fundamental physics.
The neutron star equation of state encodes information about the structure of a neutron star and thus tells us how matter behaves at the extreme densities found inside neutron stars. Constraints on the neutron star equation of state would then provide constraints on how the strong force of the Standard Model works, which would have profound implications for nuclear and atomic physics. This makes neutron stars natural laboratories for probing fundamental physics. For example, the exotic states that may be found at the cores of neutron stars are types of QCD matter. At the extreme densities at the centers of neutron stars, neutrons become disrupted, giving rise to a sea of quarks. This matter's equation of state is governed by the laws of quantum chromodynamics, and since QCD matter cannot be produced in any laboratory on Earth, most of the current knowledge about it is only theoretical.
Different equations of state lead to different values of observable quantities. While the equation of state directly relates only the density and pressure, it also leads to calculable observables such as the speed of sound, mass, radius, and Love numbers. Because the equation of state is unknown, there are many proposed ones, such as FPS, UU, APR, L, and SLy, and it is an active area of research. Different factors can be considered when creating the equation of state, such as phase transitions. Another aspect of the equation of state is whether it is soft or stiff. This relates to how much pressure there is at a certain energy density, and often corresponds to phase transitions.
When the material is about to go through a phase transition, the pressure will tend to increase until it shifts into a more stable state of matter. A soft equation of state would have a gently rising pressure versus energy density, while a stiff one would have a sharper rise in pressure. In neutron stars, nuclear physicists are still testing whether the equation of state should be stiff or soft, and sometimes it changes within individual equations of state depending on the phase transitions within the model. This is referred to as the equation of state stiffening or softening, depending on the previous behavior. Since it is unknown what neutron stars are made of, there is room for different phases of matter to be explored within the equation of state.
Density and pressure
Neutron stars have overall densities of to ( to times the density of the Sun), which is comparable to the approximate density of an atomic nucleus of . The density increases with depth, varying from about at the crust to an estimated or deeper inside. Pressure increases accordingly, from about (32 QPa) at the inner crust to in the center. A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over , about 900 times the mass of the Great Pyramid of Giza. The entire mass of the Earth at neutron star density would fit into a sphere 305 m in diameter, about the size of the Arecibo Telescope. In popular scientific writing, neutron stars are sometimes described as macroscopic atomic nuclei. Indeed, both states are composed of nucleons, and they share a similar density to within an order of magnitude. However, in other respects, neutron stars and atomic nuclei are quite different. A nucleus is held together by the strong interaction, whereas a neutron star is held together by gravity. The density of a nucleus is uniform, while neutron stars are predicted to consist of multiple layers with varying compositions and densities.
Current constraints
Because equations of state for neutron stars lead to different observables, such as different mass-radius relations, there are many astronomical constraints on equations of state. These come mostly from LIGO, which is a gravitational wave observatory, and NICER, which is an X-ray telescope. NICER's observations of pulsars in binary systems, from which the pulsar mass and radius can be estimated, can constrain the neutron star equation of state. A 2021 measurement of the pulsar PSR J0740+6620 was able to constrain the radius of a 1.4 solar mass neutron star to km with 95% confidence. These mass-radius constraints, combined with chiral effective field theory calculations, tighten constraints on the neutron star equation of state.
Equation of state constraints from LIGO gravitational wave detections start with nuclear and atomic physics researchers, who work to propose theoretical equations of state (such as FPS, UU, APR, L, SLy, and others). The proposed equations of state can then be passed on to astrophysics researchers who run simulations of binary neutron star mergers. From these simulations, researchers can extract gravitational waveforms, thus studying the relationship between the equation of state and the gravitational waves emitted by binary neutron star mergers. Using these relations, one can constrain the neutron star equation of state when gravitational waves from binary neutron star mergers are observed.
Past numerical relativity simulations of binary neutron star mergers have found relationships between the equation of state and frequency-dependent peaks of the gravitational wave signal that can be applied to LIGO detections. For example, the LIGO detection of the binary neutron star merger GW170817 provided limits on the tidal deformability of the two neutron stars, which dramatically reduced the family of allowed equations of state. Future gravitational wave signals with next generation detectors like Cosmic Explorer can impose further constraints. When nuclear physicists are assessing the plausibility of a proposed equation of state, it is useful to compare it with these constraints to see whether it predicts neutron stars of the observed masses and radii. There is also recent work on constraining the equation of state with the speed of sound through hydrodynamics.
Tolman-Oppenheimer-Volkoff Equation
The Tolman-Oppenheimer-Volkoff (TOV) equation can be used to describe a neutron star. The equation is a solution to Einstein's equations from general relativity for a spherically symmetric, time-invariant metric. With a given equation of state, solving the equation leads to observables such as the mass and radius. There are many codes that numerically solve the TOV equation for a given equation of state to find the mass-radius relation and other observables for that equation of state. The following differential equations can be solved numerically to find the neutron star observables:
\frac{dP}{dr} = -\frac{G\,(\varepsilon + P)\left(m + 4\pi r^{3} P/c^{2}\right)}{r^{2} c^{2}\left(1 - 2Gm/(r c^{2})\right)}, \qquad \frac{dm}{dr} = \frac{4\pi r^{2}\,\varepsilon}{c^{2}}
where G is the gravitational constant, P is the pressure, \varepsilon is the energy density (found from the equation of state), c is the speed of light, and m(r) is the gravitational mass enclosed within radius r.
Mass-Radius relation
Using the TOV equations and an equation of state, a mass-radius curve can be found. The idea is that for the correct equation of state, every neutron star that could possibly exist would lie along that curve. This is one of the ways equations of state can be constrained by astronomical observations. To create these curves, one must solve the TOV equations for different central densities. For each central density, the mass and pressure equations are integrated numerically until the pressure drops to zero, which marks the surface of the star. Each solution gives a corresponding mass and radius for that central density.
Mass-radius curves determine what the maximum mass is for a given equation of state. Through most of the mass-radius curve, each radius corresponds to a unique mass value. At a certain point, the curve will reach a maximum and start going back down, leading to repeated mass values for different radii. This maximum point is what is known as the maximum mass. Beyond that mass, the star will no longer be stable, i.e. no longer be able to hold itself up against the force of gravity, and would collapse into a black hole. Since each equation of state leads to a different mass-radius curve, each also leads to a unique maximum mass value. The maximum mass value is unknown as long as the equation of state remains unknown. This is very important when it comes to constraining the equation of state. Using a degenerate neutron gas equation of state with the TOV equations, Oppenheimer and Volkoff derived the original Tolman-Oppenheimer-Volkoff limit of roughly 0.7 solar masses. Since the neutron stars that have been observed are more massive than that, that maximum mass was discarded. The most recent massive neutron star that was observed was PSR J0952-0607, which was solar masses.
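The procedure just described, choosing a central density, integrating the pressure and mass equations outward until the pressure vanishes, and recording the resulting mass and radius, can be sketched in a few lines of Python. The polytropic equation of state, the constant K, and the central densities below are illustrative placeholders rather than one of the realistic tabulated equations of state named in this article.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

# Illustrative polytropic equation of state P = K * rho**Gamma
# (Gamma = 2, with K chosen to give a star of roughly solar-mass scale;
#  a realistic calculation would use a tabulated EOS such as APR or SLy).
Gamma = 2.0
K = 1.45e-2          # Pa m^6 kg^-2, illustrative value

def eos_energy_density(P):
    # Energy density (J/m^3) for a given pressure, from the polytrope.
    rho = math.sqrt(P / K)               # rest-mass density, kg/m^3
    return rho * c**2 + P / (Gamma - 1)  # rest-mass energy plus internal energy

def tov_mass_radius(rho_central, dr=1.0):
    # Integrate the TOV equations outward from the centre for one central
    # density; returns (gravitational mass in kg, radius in m).
    P = K * rho_central**Gamma           # central pressure
    eps = eos_energy_density(P)
    r = 1.0                              # start slightly off-centre to avoid r = 0
    m = (4.0 / 3.0) * math.pi * r**3 * eps / c**2
    while P > 1e-10 * K * rho_central**Gamma:
        eps = eos_energy_density(P)
        dPdr = (-G * (eps + P) * (m + 4.0 * math.pi * r**3 * P / c**2)
                / (r**2 * c**2 * (1.0 - 2.0 * G * m / (r * c**2))))
        dmdr = 4.0 * math.pi * r**2 * eps / c**2
        P += dPdr * dr                   # simple Euler step; adequate for a sketch
        m += dmdr * dr
        r += dr
    return m, r

M_sun = 1.989e30
for rho_c in (5e17, 8e17, 1.2e18, 2e18):         # central densities, kg/m^3, illustrative
    m, r = tov_mass_radius(rho_c)
    print(f"rho_c = {rho_c:.1e} kg/m^3 -> M = {m / M_sun:.2f} M_sun, R = {r / 1e3:.1f} km")

Sweeping the central density over a wider range and plotting mass against radius traces out the mass-radius curve for the chosen equation of state, including the maximum-mass turnover described above.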
Any equation of state with a mass less than that would not predict that star and thus is much less likely to be correct. An interesting phenomenon in this area of astrophysics relating to the maximum mass of neutron stars is what is called the "mass gap". The mass gap refers to a range of masses from roughly 2-5 solar masses where very few compact objects were observed. This range is based on the current assumed maximum mass of neutron stars (~2 solar masses) and the minimum black hole mass (~5 solar masses). Recently, some objects have been discovered that fall in that mass gap from gravitational wave detections. If the true maximum mass of neutron stars was known, it would help characterize compact objects in that mass range as either neutron stars or black holes. I-Love-Q Relations There are three more properties of neutron stars that are dependent on the equation of state but can also be astronomically observed: the moment of inertia, the quadrupole moment, and the Love number. The moment of inertia of a neutron star describes how fast the star can rotate at a fixed spin momentum. The quadrupole moment of a neutron star specifies how much that star is deformed out of its spherical shape. The Love number of the neutron star represents how easy or difficult it is to deform the star due to tidal forces, typically important in binary systems. While these properties depend on the material of the star and therefore on the equation of state, there is a relation between these three quantities that is independent of the equation of state. This relation assumes slowly and uniformly rotating stars and uses general relativity to derive the relation. While this relation would not be able to add constraints to the equation of state, since it is independent of the equation of state, it does have other applications. If one of these three quantities can be measured for a particular neutron star, this relation can be used to find the other two. In addition, this relation can be used to break the degeneracies in detections by gravitational wave detectors of the quadrupole moment and spin, allowing the average spin to be determined within a certain confidence level. Temperature The temperature inside a newly formed neutron star is from around to . However, the huge number of neutrinos it emits carries away so much energy that the temperature of an isolated neutron star falls within a few years to around . At this lower temperature, most of the light generated by a neutron star is in X-rays. Some researchers have proposed a neutron star classification system using Roman numerals (not to be confused with the Yerkes luminosity classes for non-degenerate stars) to sort neutron stars by their mass and cooling rates: type I for neutron stars with low mass and cooling rates, type II for neutron stars with higher mass and cooling rates, and a proposed type III for neutron stars with even higher mass, approaching , and with higher cooling rates and possibly candidates for exotic stars. Magnetic field The magnetic field strength on the surface of neutron stars ranges from to  tesla (T). These are orders of magnitude higher than in any other object: for comparison, a continuous 16 T field has been achieved in the laboratory and is sufficient to levitate a living frog due to diamagnetic levitation. Variations in magnetic field strengths are most likely the main factor that allows different types of neutron stars to be distinguished by their spectra, and explains the periodicity of pulsars. 
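One way to get a feel for how such enormous surface fields could arise is the flux-freezing argument discussed below: if the magnetic flux through the surface of the collapsing star is conserved, the field strength scales as the inverse square of the radius. A rough sketch, with the progenitor radius and field chosen purely for illustration:

# Magnetic flux freezing: if the flux B * 4*pi*R**2 through the stellar surface
# is conserved during collapse, the surface field grows by (R_before / R_after)**2.
# The progenitor values below are illustrative assumptions only.
B_before = 1.0e-2    # progenitor surface field, tesla (about 100 gauss, assumed)
R_before = 7.0e8     # progenitor radius, m (roughly solar-sized, assumed)
R_after = 1.2e4      # neutron star radius, m (about 12 km)

B_after = B_before * (R_before / R_after) ** 2
print(f"amplification factor ~ {(R_before / R_after) ** 2:.1e}")
print(f"resulting surface field ~ {B_after:.1e} T")

These illustrative inputs give a field of order 10^7 tesla, roughly comparable to ordinary pulsar surface fields, though, as noted below, this simple estimate does not account for the full spread of observed field strengths.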
The neutron stars known as magnetars have the strongest magnetic fields, in the range of to , and have become the widely accepted hypothesis for neutron star types soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs). The magnetic energy density of a field is extreme, greatly exceeding the mass-energy density of ordinary matter. Fields of this strength are able to polarize the vacuum to the point that the vacuum becomes birefringent. Photons can merge or split in two, and virtual particle-antiparticle pairs are produced. The field changes electron energy levels and atoms are forced into thin cylinders. Unlike in an ordinary pulsar, magnetar spin-down can be directly powered by its magnetic field, and the magnetic field is strong enough to stress the crust to the point of fracture. Fractures of the crust cause starquakes, observed as extremely luminous millisecond hard gamma ray bursts. The fireball is trapped by the magnetic field, and comes in and out of view when the star rotates, which is observed as a periodic soft gamma repeater (SGR) emission with a period of 5–8 seconds and which lasts for a few minutes. The origins of the strong magnetic field are as yet unclear. One hypothesis is that of "flux freezing", or conservation of the original magnetic flux during the formation of the neutron star. If an object has a certain magnetic flux over its surface area, and that area shrinks to a smaller area, but the magnetic flux is conserved, then the magnetic field would correspondingly increase. Likewise, a collapsing star begins with a much larger surface area than the resulting neutron star, and conservation of magnetic flux would result in a far stronger magnetic field. However, this simple explanation does not fully explain magnetic field strengths of neutron stars. Gravity The gravitational field at a neutron star's surface is about times stronger than on Earth, at around . Such a strong gravitational field acts as a gravitational lens and bends the radiation emitted by the neutron star such that parts of the normally invisible rear surface become visible. If the radius of the neutron star is 3GM/c2 or less, then the photons may be trapped in an orbit, thus making the whole surface of that neutron star visible from a single vantage point, along with destabilizing photon orbits at or below the 1 radius distance of the star. A fraction of the mass of a star that collapses to form a neutron star is released in the supernova explosion from which it forms (from the law of mass–energy equivalence, ). The energy comes from the gravitational binding energy of a neutron star. Hence, the gravitational force of a typical neutron star is huge. If an object were to fall from a height of one meter on a neutron star 12 kilometers in radius, it would reach the ground at around 1,400 kilometers per second. However, even before impact, the tidal force would cause spaghettification, breaking any sort of an ordinary object into a stream of material. Because of the enormous gravity, time dilation between a neutron star and Earth is significant. For example, eight years could pass on the surface of a neutron star, yet ten years would have passed on Earth, not including the time-dilation effect of the star's very rapid rotation. Neutron star relativistic equations of state describe the relation of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). 
EB is the ratio of gravitational binding energy mass equivalent to the observed neutron star gravitational mass of M kilograms with radius R meters,
E_B = \frac{0.60\,\beta}{1 - \frac{\beta}{2}}, \qquad \beta = \frac{G M}{R c^{2}}.
Given current values of G, c, and the solar mass, and star masses M commonly reported as multiples of one solar mass, the relativistic fractional binding energy of a neutron star is approximately
E_B \approx \frac{886\,M}{R - 738\,M}
with M in solar masses and R in meters. For a star of about two solar masses, a neutron star would not be more compact than 10,970 meters radius (AP4 model). Its mass fraction gravitational binding energy would then be 0.187, −18.7% (exothermic). This is not near 0.6/2 = 0.3, −30%.
Structure
Current understanding of the structure of neutron stars is defined by existing mathematical models, but it might be possible to infer some details through studies of neutron-star oscillations. Asteroseismology, a study applied to ordinary stars, can reveal the inner structure of neutron stars by analyzing observed spectra of stellar oscillations. Current models indicate that matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy elements, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen. If the surface temperature exceeds (as in the case of a young pulsar), the surface should be fluid instead of the solid phase that might exist in cooler neutron stars (temperature <). The "atmosphere" of a neutron star is hypothesized to be at most several micrometers thick, and its dynamics are fully controlled by the neutron star's magnetic field. Below the atmosphere one encounters a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities on the order of millimeters or less), due to the extreme gravitational field. Proceeding inward, one encounters nuclei with ever-increasing numbers of neutrons; such nuclei would decay quickly on Earth, but are kept stable by tremendous pressures. As this process continues at increasing depths, the neutron drip becomes overwhelming, and the concentration of free neutrons increases rapidly. After a supernova explosion of a supergiant star, neutron stars are born from the remnants. A neutron star is composed mostly of neutrons (neutral particles) and contains a small fraction of protons (positively charged particles) and electrons (negatively charged particles), as well as nuclei. In the extreme density of a neutron star, many neutrons are free neutrons, meaning they are not bound in atomic nuclei and move freely within the star's dense matter, especially in the densest regions of the star: the inner crust and core. Over the star's lifetime, as its density increases, the energy of the electrons also increases, which generates more neutrons. In neutron stars, the neutron drip is the transition point where nuclei become so neutron-rich that they can no longer hold additional neutrons, leading to a sea of free neutrons being formed. The sea of neutrons formed after neutron drip provides additional pressure support, which helps maintain the star's structural integrity and prevents gravitational collapse. The neutron drip takes place within the inner crust of the neutron star and starts when the density becomes so high that nuclei can no longer hold additional neutrons. At the beginning of the neutron drip, the pressure contributions from neutrons and electrons and the total pressure in the star are roughly equal.
As the density of the neutron star increases, the nuclei break down, and the neutron pressure of the star becomes dominant. When the density reaches a point where nuclei touch and subsequently merge, they form a fluid of neutrons with a sprinkle of electrons and protons. This transition marks the neutron drip, where the dominant pressure in the neutron star shifts from degenerate electrons to neutrons. At very high densities, the neutron pressure becomes the primary pressure holding up the star, with neutrons being non-relativistic (moving slower than the speed of light) and extremely compressed. However, at extremely high densities, neutrons begin to move at relativistic speeds (close to the speed of light). These high speeds significantly increase the star's overall pressure, altering the star's equilibrium state, and potentially leading to the formation of exotic states of matter. In that region, there are nuclei, free electrons, and free neutrons. The nuclei become increasingly small (gravity and pressure overwhelming the strong force) until the core is reached, by definition the point where mostly neutrons exist. The expected hierarchy of phases of nuclear matter in the inner crust has been characterized as "nuclear pasta", with fewer voids and larger structures towards higher pressures. The composition of the superdense matter in the core remains uncertain. One model describes the core as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic forms of matter are possible, including degenerate strange matter (containing strange quarks in addition to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons, or ultra-dense quark-degenerate matter. Radiation Pulsars Neutron stars are detected from their electromagnetic radiation. Neutron stars are usually observed to pulse radio waves and other electromagnetic radiation, and neutron stars observed with pulses are called pulsars. Pulsars' radiation is thought to be caused by particle acceleration near their magnetic poles, which need not be aligned with the rotational axis of the neutron star. It is thought that a large electrostatic field builds up near the magnetic poles, leading to electron emission. These electrons are magnetically accelerated along the field lines, leading to curvature radiation, with the radiation being strongly polarized towards the plane of curvature. In addition, high-energy photons can interact with lower-energy photons and the magnetic field for electron−positron pair production, which through electron–positron annihilation leads to further high-energy photons. The radiation emanating from the magnetic poles of neutron stars can be described as magnetospheric radiation, in reference to the magnetosphere of the neutron star. It is not to be confused with magnetic dipole radiation, which is emitted because the magnetic axis is not aligned with the rotational axis, with a radiation frequency the same as the neutron star's rotational frequency. If the axis of rotation of the neutron star is different from the magnetic axis, external viewers will only see these beams of radiation whenever the magnetic axis point towards them during the neutron star rotation. Therefore, periodic pulses are observed, at the same rate as the rotation of the neutron star. In May 2022, astronomers reported an ultra-long-period radio-emitting neutron star PSR J0901-4046, with spin properties distinct from the known neutron stars. 
It is unclear how its radio emission is generated, and it challenges the current understanding of how pulsars evolve. Non-pulsating neutron stars In addition to pulsars, non-pulsating neutron stars have also been identified, although they may have minor periodic variation in luminosity. This seems to be a characteristic of the X-ray sources known as Central Compact Objects in supernova remnants (CCOs in SNRs), which are thought to be young, radio-quiet isolated neutron stars. Spectra In addition to radio emissions, neutron stars have also been identified in other parts of the electromagnetic spectrum. This includes visible light, near infrared, ultraviolet, X-rays, and gamma rays. Pulsars observed in X-rays are known as X-ray pulsars if accretion-powered, while those identified in visible light are known as optical pulsars. The majority of neutron stars detected, including those identified in optical, X-ray, and gamma rays, also emit radio waves; the Crab Pulsar produces electromagnetic emissions across the spectrum. However, there exist neutron stars called radio-quiet neutron stars, with no radio emissions detected. Rotation Neutron stars rotate extremely rapidly after their formation due to the conservation of angular momentum; in analogy to spinning ice skaters pulling in their arms, the slow rotation of the original star's core speeds up as it shrinks. A newborn neutron star can rotate many times a second. Spin down Over time, neutron stars slow, as their rotating magnetic fields in effect radiate energy associated with the rotation; older neutron stars may take several seconds for each revolution. This is called spin down. The rate at which a neutron star slows its rotation is usually constant and very small. The periodic time (P) is the rotational period, the time for one rotation of a neutron star. The spin-down rate, the rate of slowing of rotation, is then given the symbol (P-dot), the derivative of P with respect to time. It is defined as periodic time increase per unit time; it is a dimensionless quantity, but can be given the units of s⋅s−1 (seconds per second). The spin-down rate (P-dot) of neutron stars usually falls within the range of to , with the shorter period (or faster rotating) observable neutron stars usually having smaller P-dot. As a neutron star ages, its rotation slows (as P increases); eventually, the rate of rotation will become too slow to power the radio-emission mechanism, so radio emission from the neutron star no longer can be detected. P and P-dot allow minimum magnetic fields of neutron stars to be estimated. P and P-dot can be also used to calculate the characteristic age of a pulsar, but gives an estimate which is somewhat larger than the true age when it is applied to young pulsars. P and P-dot can also be combined with neutron star's moment of inertia to estimate a quantity called spin-down luminosity, which is given the symbol (E-dot). It is not the measured luminosity, but rather the calculated loss rate of rotational energy that would manifest itself as radiation. For neutron stars where the spin-down luminosity is comparable to the actual luminosity, the neutron stars are said to be "rotation powered". The observed luminosity of the Crab Pulsar is comparable to the spin-down luminosity, supporting the model that rotational kinetic energy powers the radiation from it. 
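The exact expressions are not written out above; the following minimal sketch uses the standard textbook forms, the characteristic age P/(2Ṗ) and the spin-down luminosity 4π²IṖ/P³, with I the moment of inertia. The numbers are illustrative assumptions of the order of those usually quoted for the Crab Pulsar, together with the canonical moment of inertia of about 10^38 kg·m²; they are not figures taken from the text.

import math

# Illustrative spin-down calculation (values are assumptions, roughly Crab-like).
P = 0.0334        # rotational period, s
P_dot = 4.2e-13   # spin-down rate, dimensionless (seconds per second)
I = 1e38          # canonical neutron-star moment of inertia, kg m^2

characteristic_age = P / (2 * P_dot)                       # in seconds
spin_down_luminosity = 4 * math.pi**2 * I * P_dot / P**3   # rotational energy loss rate, W

print(f"characteristic age ~ {characteristic_age / 3.156e7:.0f} years")  # ~1300 years
print(f"spin-down luminosity ~ {spin_down_luminosity:.1e} W")            # a few times 10^31 W

A calculation of this kind is what underlies the statement that the Crab Pulsar's observed luminosity is comparable to its spin-down luminosity.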
With neutron stars such as magnetars, where the actual luminosity exceeds the spin-down luminosity by about a factor of one hundred, it is assumed that the luminosity is powered by magnetic dissipation, rather than being rotation powered. P and P-dot can also be plotted for neutron stars to create a P–P-dot diagram. It encodes a tremendous amount of information about the pulsar population and its properties, and has been likened to the Hertzsprung–Russell diagram in its importance for neutron stars. Spin up Neutron star rotational speeds can increase, a process known as spin up. Sometimes neutron stars absorb orbiting matter from companion stars, increasing the rotation rate and reshaping the neutron star into an oblate spheroid. This can increase the rotation rate of the neutron star to over a hundred times per second, as in the case of millisecond pulsars. The most rapidly rotating neutron star currently known, PSR J1748-2446ad, rotates at 716 revolutions per second. A 2007 paper reported the detection of an X-ray burst oscillation, which provides an indirect measure of spin, of 1122 Hz from the neutron star XTE J1739-285, suggesting 1122 rotations a second. However, at present, this signal has only been seen once, and should be regarded as tentative until confirmed in another burst from that star. Glitches and starquakes Sometimes a neutron star will undergo a glitch, a sudden small increase of its rotational speed or spin up. Glitches are thought to be the effect of a starquake—as the rotation of the neutron star slows, its shape becomes more spherical. Due to the stiffness of the "neutron" crust, this happens as discrete events when the crust ruptures, creating a starquake similar to earthquakes. After the starquake, the star will have a smaller equatorial radius, and because angular momentum is conserved, its rotational speed increases. Starquakes occurring in magnetars, with a resulting glitch, are the leading hypothesis for the gamma-ray sources known as soft gamma repeaters. Recent work, however, suggests that a starquake would not release sufficient energy for a neutron star glitch; it has been suggested that glitches may instead be caused by transitions of vortices in the theoretical superfluid core of the neutron star from one metastable energy state to a lower one, thereby releasing energy that appears as an increase in the rotation rate. Anti-glitches An anti-glitch, a sudden small decrease in rotational speed, or spin down, of a neutron star has also been reported. It occurred in the magnetar 1E 2259+586, which in one case produced an X-ray luminosity increase of a factor of 20 and a significant spin-down rate change. Current neutron star models do not predict this behavior. If the cause were internal, this would suggest differential rotation of the solid outer crust and the superfluid component of the magnetar's inner structure. Population and distances At present, there are about 3,200 known neutron stars in the Milky Way and the Magellanic Clouds, the majority of which have been detected as radio pulsars. Neutron stars are mostly concentrated along the disk of the Milky Way, although the spread perpendicular to the disk is large because the supernova explosion process can impart high translational speeds (400 km/s) to the newly formed neutron star. Some of the closest known neutron stars are RX J1856.5−3754, which is about 400 light-years from Earth, and PSR J0108−1431, at about 424 light-years.
RX J1856.5-3754 is a member of a close group of neutron stars called The Magnificent Seven. Another nearby neutron star that was detected transiting the backdrop of the constellation Ursa Minor has been nicknamed Calvera by its Canadian and American discoverers, after the villain in the 1960 film The Magnificent Seven. This rapidly moving object was discovered using the ROSAT Bright Source Catalog. Neutron stars are only detectable with modern technology during the earliest stages of their lives (almost always less than 1 million years) and are vastly outnumbered by older neutron stars that would only be detectable through their blackbody radiation and gravitational effects on other stars. Binary neutron star systems About 5% of all known neutron stars are members of a binary system. The formation and evolution of binary neutron stars and double neutron stars can be a complex process. Neutron stars have been observed in binaries with ordinary main-sequence stars, red giants, white dwarfs, or other neutron stars. According to modern theories of binary evolution, it is expected that neutron stars also exist in binary systems with black hole companions. The merger of binaries containing two neutron stars, or a neutron star and a black hole, has been observed through the emission of gravitational waves. X-ray binaries Binary systems containing neutron stars often emit X-rays, which are emitted by hot gas as it falls towards the surface of the neutron star. The source of the gas is the companion star, the outer layers of which can be stripped off by the gravitational force of the neutron star if the two stars are sufficiently close. As the neutron star accretes this gas, its mass can increase; if enough mass is accreted, the neutron star may collapse into a black hole. Neutron star binary mergers and nucleosynthesis The distance between two neutron stars in a close binary system is observed to shrink as gravitational waves are emitted. Ultimately, the neutron stars will come into contact and coalesce. The coalescence of binary neutron stars is one of the leading models for the origin of short gamma-ray bursts. Strong evidence for this model came from the observation of a kilonova associated with the short-duration gamma-ray burst GRB 130603B, and was finally confirmed by detection of gravitational wave GW170817 and short GRB 170817A by LIGO, Virgo, and 70 observatories covering the electromagnetic spectrum observing the event. The light emitted in the kilonova is believed to come from the radioactive decay of material ejected in the merger of the two neutron stars. The merger momentarily creates an environment of such extreme neutron flux that the r-process can occur; this—as opposed to supernova nucleosynthesis—may be responsible for the production of around half the isotopes in chemical elements beyond iron. Planets Neutron stars can host exoplanets. These can be original, circumbinary, captured, or the result of a second round of planet formation. Pulsars can also strip the atmosphere off from a star, leaving a planetary-mass remnant, which may be understood as a chthonian planet or a stellar object depending on interpretation. For pulsars, such pulsar planets can be detected with the pulsar timing method, which allows for high precision and detection of much smaller planets than with other methods. Two systems have been definitively confirmed. The first exoplanets ever to be detected were the three planets Draugr, Poltergeist and Phobetor around the pulsar Lich, discovered in 1992–1994. 
Of these, Draugr is the smallest exoplanet ever detected, at a mass of twice that of the Moon. Another system is PSR B1620−26, where a circumbinary planet orbits a neutron star-white dwarf binary system. Also, there are several unconfirmed candidates. Pulsar planets receive little visible light, but massive amounts of ionizing radiation and high-energy stellar wind, which makes them rather hostile environments to life as presently understood. History of discoveries At the meeting of the American Physical Society in December 1933 (the proceedings were published in January 1934), Walter Baade and Fritz Zwicky proposed the existence of neutron stars, less than two years after the discovery of the neutron by James Chadwick. In seeking an explanation for the origin of a supernova, they tentatively proposed that in supernova explosions ordinary stars are turned into stars that consist of extremely closely packed neutrons that they called neutron stars. Baade and Zwicky correctly proposed at that time that the release of the gravitational binding energy of the neutron stars powers the supernova: "In the supernova process, mass in bulk is annihilated". Neutron stars were thought to be too faint to be detectable and little work was done on them until November 1967, when Franco Pacini pointed out that if the neutron stars were spinning and had large magnetic fields, then electromagnetic waves would be emitted. Unknown to him, radio astronomer Antony Hewish and his graduate student Jocelyn Bell at Cambridge were shortly to detect radio pulses from stars that are now believed to be highly magnetized, rapidly spinning neutron stars, known as pulsars. In 1965, Antony Hewish and Samuel Okoye discovered "an unusual source of high radio brightness temperature in the Crab Nebula". This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054. In 1967, Iosif Shklovsky examined the X-ray and optical observations of Scorpius X-1 and correctly concluded that the radiation comes from a neutron star at the stage of accretion. In 1967, Jocelyn Bell Burnell and Antony Hewish discovered regular radio pulses from PSR B1919+21. This pulsar was later interpreted as an isolated, rotating neutron star. The energy source of the pulsar is the rotational energy of the neutron star. The majority of known neutron stars (about 2000, as of 2010) have been discovered as pulsars, emitting regular radio pulses. In 1968, Richard V. E. Lovelace and collaborators determined the period of the Crab pulsar using the Arecibo Observatory. After this discovery, scientists concluded that pulsars were rotating neutron stars. Before that, many scientists believed that pulsars were pulsating white dwarfs. In 1971, Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum discovered 4.8 second pulsations in an X-ray source in the constellation Centaurus, Cen X-3. They interpreted this as resulting from a rotating hot neutron star. The energy source is gravitational and results from a rain of gas falling onto the surface of the neutron star from a companion star or the interstellar medium. In 1974, Antony Hewish was awarded the Nobel Prize in Physics "for his decisive role in the discovery of pulsars" without Jocelyn Bell, who shared in the discovery. In 1974, Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass.
Albert Einstein's general theory of relativity predicts that massive objects in short binary orbits should emit gravitational waves, and thus that their orbit should decay with time. This was indeed observed, precisely as general relativity predicts, and in 1993, Taylor and Hulse were awarded the Nobel Prize in Physics for this discovery. In 1982, Don Backer and colleagues discovered the first millisecond pulsar, PSR B1937+21. This object spins 642 times per second, a value that placed fundamental constraints on the mass and radius of neutron stars. Many millisecond pulsars were later discovered, but PSR B1937+21 remained the fastest-spinning known pulsar for 24 years, until PSR J1748-2446ad (which spins ~716 times a second) was discovered. In 2003, Marta Burgay and colleagues discovered the first double neutron star system where both components are detectable as pulsars, PSR J0737−3039. The discovery of this system allows a total of 5 different tests of general relativity, some of these with unprecedented precision. In 2010, Paul Demorest and colleagues measured the mass of the millisecond pulsar PSR J1614−2230 to be , using Shapiro delay. This was substantially higher than any previously measured neutron star mass (, see PSR J1903+0327), and places strong constraints on the interior composition of neutron stars. In 2013, John Antoniadis and colleagues measured the mass of PSR J0348+0432 to be , using white dwarf spectroscopy. This confirmed the existence of such massive stars using a different method. Furthermore, this allowed, for the first time, a test of general relativity using such a massive neutron star. In August 2017, LIGO and Virgo made first detection of gravitational waves produced by colliding neutron stars (GW170817), leading to further discoveries about neutron stars. In October 2018, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be directly related to the historic GW170817 and associated with the merger of two neutron stars. The similarities between the two events, in terms of gamma ray, optical and x-ray emissions, as well as to the nature of the associated host galaxies, are "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be a kilonova, which may be more common in the universe than previously understood, according to the researchers. In July 2019, astronomers reported that a new method to determine the Hubble constant, and resolve the discrepancy of earlier methods, has been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger of GW170817. Their measurement of the Hubble constant is (km/s)/Mpc. A 2020 study by University of Southampton PhD student Fabian Gittins suggested that surface irregularities ("mountains") may only be fractions of a millimeter tall (about 0.000003% of the neutron star's diameter), hundreds of times smaller than previously predicted, a result bearing implications for the non-detection of gravitational waves from spinning neutron stars. Using the JWST, astronomers have identified a neutron star within the remnants of the Supernova 1987A stellar explosion after seeking to do so for 37 years, according to a 23 February 2024 Science article. In a paradigm shift, new JWST data provides the elusive direct confirmation of neutron stars within supernova remnants as well as a deeper understanding of the processes at play within SN 1987A's remnants. 
Subtypes There are a number of types of object that consist of or contain a neutron star:
Isolated neutron star (INS): not in a binary system.
Rotation-powered pulsar (RPP or "radio pulsar"): neutron stars that emit directed pulses of radiation towards us at regular intervals (due to their strong magnetic fields).
Rotating radio transients (RRATs): thought to be pulsars which emit more sporadically and/or with higher pulse-to-pulse variability than the bulk of the known pulsars.
Magnetar: a neutron star with an extremely strong magnetic field (1000 times more than a regular neutron star), and long rotation periods (5 to 12 seconds).
Soft gamma repeater (SGR).
Anomalous X-ray pulsar (AXP).
Radio-quiet neutron stars.
X-ray dim isolated neutron stars.
Central compact objects in supernova remnants (CCOs in SNRs): young, radio-quiet non-pulsating X-ray sources, thought to be isolated neutron stars surrounded by supernova remnants.
X-ray pulsars or "accretion-powered pulsars": a class of X-ray binaries.
Low-mass X-ray binary pulsars: a class of low-mass X-ray binaries (LMXB), a pulsar with a main sequence star, white dwarf or red giant.
Millisecond pulsar (MSP) ("recycled pulsar").
"Spider Pulsar", a pulsar whose companion is a semi-degenerate star.
"Black Widow" pulsar, a pulsar that falls under the "Spider Pulsar" category if the companion has extremely low mass (less than ).
"Redback" pulsar, if the companion is more massive.
Sub-millisecond pulsar.
X-ray burster: a neutron star with a low-mass binary companion from which matter is accreted, resulting in irregular bursts of energy from the surface of the neutron star.
Intermediate-mass X-ray binary pulsars: a class of intermediate-mass X-ray binaries (IMXB), a pulsar with an intermediate-mass star.
High-mass X-ray binary pulsars: a class of high-mass X-ray binaries (HMXB), a pulsar with a massive star.
Binary pulsars: a pulsar with a binary companion, often a white dwarf or neutron star.
X-ray tertiary (theorized).
There are also a number of theorized compact stars with similar properties that are not actually neutron stars:
Protoneutron star (PNS): a theorized intermediate-stage object that cools and contracts to form a neutron star or a black hole.
Exotic star:
Thorne–Żytkow object: currently a hypothetical merger of a neutron star into a red giant star.
Quark star: currently a hypothetical type of neutron star composed of quark matter, or strange matter. As of 2018, there are three candidates.
Electroweak star: currently a hypothetical type of extremely heavy neutron star, in which the quarks are converted to leptons through the electroweak force, but the gravitational collapse of the neutron star is prevented by radiation pressure. As of 2018, there is no evidence for their existence.
Preon star: currently a hypothetical type of neutron star composed of preon matter. As of 2018, there is no evidence for the existence of preons.
Examples of neutron stars
Black Widow Pulsar – a millisecond pulsar that is very massive
PSR J0952-0607 – the heaviest neutron star with , a type of Black Widow Pulsar
LGM-1 (now known as PSR B1919+21) – the first recognized radio-pulsar. It was discovered by Jocelyn Bell Burnell in 1967.
PSR B1257+12 (Also known as Lich) – the first neutron star discovered with planets (a millisecond pulsar).
PSR B1509−58 – source of the "Hand of God" photo shot by the Chandra X-ray Observatory
RX J1856.5−3754 – the closest neutron star
The Magnificent Seven – a group of nearby, X-ray dim isolated neutron stars
PSR J0348+0432 – the most massive neutron star with a well-constrained mass,
SWIFT J1756.9-2508 – a millisecond pulsar with a stellar-type companion with planetary range mass (below brown dwarf)
Swift J1818.0-1607 – the youngest-known magnetar
Physical sciences
Stellar astronomy
21918
https://en.wikipedia.org/wiki/Normal%20subgroup
Normal subgroup
In abstract algebra, a normal subgroup (also known as an invariant subgroup or self-conjugate subgroup) is a subgroup that is invariant under conjugation by members of the group of which it is a part. In other words, a subgroup of the group is normal in if and only if for all and . The usual notation for this relation is . Normal subgroups are important because they (and only they) can be used to construct quotient groups of the given group. Furthermore, the normal subgroups of are precisely the kernels of group homomorphisms with domain , which means that they can be used to internally classify those homomorphisms. Évariste Galois was the first to realize the importance of the existence of normal subgroups. Definitions A subgroup of a group is called a normal subgroup of if it is invariant under conjugation; that is, the conjugation of an element of by an element of is always in . The usual notation for this relation is . Equivalent conditions For any subgroup of , the following conditions are equivalent to being a normal subgroup of . Therefore, any one of them may be taken as the definition. The image of conjugation of by any element of is a subset of , i.e., for all . The image of conjugation of by any element of is equal to i.e., for all . For all , the left and right cosets and are equal. The sets of left and right cosets of in coincide. Multiplication in preserves the equivalence relation "is in the same left coset as". That is, for every satisfying and , we have . There exists a group on the set of left cosets of where multiplication of any two left cosets and yields the left coset (this group is called the quotient group of modulo , denoted ). is a union of conjugacy classes of . is preserved by the inner automorphisms of . There is some group homomorphism whose kernel is . There exists a group homomorphism whose fibers form a group where the identity element is and multiplication of any two fibers and yields the fiber (this group is the same group mentioned above). There is some congruence relation on for which the equivalence class of the identity element is . For all and . the commutator is in . Any two elements commute modulo the normal subgroup membership relation. That is, for all , if and only if . Examples For any group , the trivial subgroup consisting of only the identity element of is always a normal subgroup of . Likewise, itself is always a normal subgroup of (if these are the only normal subgroups, then is said to be simple). Other named normal subgroups of an arbitrary group include the center of the group (the set of elements that commute with all other elements) and the commutator subgroup . More generally, since conjugation is an isomorphism, any characteristic subgroup is a normal subgroup. If is an abelian group then every subgroup of is normal, because . More generally, for any group , every subgroup of the center of is normal in (in the special case that is abelian, the center is all of , hence the fact that all subgroups of an abelian group are normal). A group that is not abelian but for which every subgroup is normal is called a Hamiltonian group. A concrete example of a normal subgroup is the subgroup of the symmetric group , consisting of the identity and both three-cycles. In particular, one can check that every coset of is either equal to itself or is equal to . On the other hand, the subgroup is not normal in since . This illustrates the general fact that any subgroup of index two is normal. 
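The S3 example above can be checked mechanically. The sketch below is a minimal Python illustration, with its own encoding of permutations as tuples: it verifies that the subgroup consisting of the identity and the two three-cycles is closed under conjugation by every element of S3, while a two-element subgroup generated by a transposition is not.

from itertools import permutations

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # identity and the two three-cycles
H = {(0, 1, 2), (1, 0, 2)}               # identity and one transposition

def is_normal(subgroup, group):
    # g n g^-1 must stay in the subgroup for every g in the group and n in the subgroup
    return all(compose(compose(g, n), inverse(g)) in subgroup
               for g in group for n in subgroup)

print(is_normal(A3, S3))   # True: an index-two subgroup, hence normal
print(is_normal(H, S3))    # False: conjugation carries the transposition outside H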
As an example of a normal subgroup within a matrix group, consider the general linear group of all invertible matrices with real entries under the operation of matrix multiplication and its subgroup of all matrices of determinant 1 (the special linear group). To see why the subgroup is normal in , consider any matrix in and any invertible matrix . Then using the two important identities and , one has that , and so as well. This means is closed under conjugation in , so it is a normal subgroup. In the Rubik's Cube group, the subgroups consisting of operations which only affect the orientations of either the corner pieces or the edge pieces are normal. The translation group is a normal subgroup of the Euclidean group in any dimension. This means: applying a rigid transformation, followed by a translation and then the inverse rigid transformation, has the same effect as a single translation. By contrast, the subgroup of all rotations about the origin is not a normal subgroup of the Euclidean group, as long as the dimension is at least 2: first translating, then rotating about the origin, and then translating back will typically not fix the origin and will therefore not have the same effect as a single rotation about the origin. Properties If is a normal subgroup of , and is a subgroup of containing , then is a normal subgroup of . A normal subgroup of a normal subgroup of a group need not be normal in the group. That is, normality is not a transitive relation. The smallest group exhibiting this phenomenon is the dihedral group of order 8. However, a characteristic subgroup of a normal subgroup is normal. A group in which normality is transitive is called a T-group. The two groups and are normal subgroups of their direct product . If the group is a semidirect product , then is normal in , though need not be normal in . If and are normal subgroups of an additive group such that and , then . Normality is preserved under surjective homomorphisms; that is, if is a surjective group homomorphism and is normal in , then the image is normal in . Normality is preserved by taking inverse images; that is, if is a group homomorphism and is normal in , then the inverse image is normal in . Normality is preserved on taking direct products; that is, if and , then . Every subgroup of index 2 is normal. More generally, a subgroup, , of finite index, , in contains a subgroup, normal in and of index dividing called the normal core. In particular, if is the smallest prime dividing the order of , then every subgroup of index is normal. The fact that normal subgroups of are precisely the kernels of group homomorphisms defined on accounts for some of the importance of normal subgroups; they are a way to internally classify all homomorphisms defined on a group. For example, a non-identity finite group is simple if and only if it is isomorphic to all of its non-identity homomorphic images, a finite group is perfect if and only if it has no normal subgroups of prime index, and a group is imperfect if and only if the derived subgroup is not supplemented by any proper normal subgroup. Lattice of normal subgroups Given two normal subgroups, and , of , their intersection and their product are also normal subgroups of . The normal subgroups of form a lattice under subset inclusion with least element, , and greatest element, . The meet of two normal subgroups, and , in this lattice is their intersection and the join is their product. The lattice is complete and modular. 
Normal subgroups, quotient groups and homomorphisms If is a normal subgroup, we can define a multiplication on cosets as follows: This relation defines a mapping . To show that this mapping is well-defined, one needs to prove that the choice of representative elements does not affect the result. To this end, consider some other representative elements . Then there are such that . It follows that where we also used the fact that is a subgroup, and therefore there is such that . This proves that this product is a well-defined mapping between cosets. With this operation, the set of cosets is itself a group, called the quotient group and denoted with There is a natural homomorphism, , given by . This homomorphism maps into the identity element of , which is the coset , that is, . In general, a group homomorphism, sends subgroups of to subgroups of . Also, the preimage of any subgroup of is a subgroup of . We call the preimage of the trivial group in the kernel of the homomorphism and denote it by . As it turns out, the kernel is always normal and the image of , is always isomorphic to (the first isomorphism theorem). In fact, this correspondence is a bijection between the set of all quotient groups of , , and the set of all homomorphic images of (up to isomorphism). It is also easy to see that the kernel of the quotient map, , is itself, so the normal subgroups are precisely the kernels of homomorphisms with domain .
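The quotient construction just described can also be made concrete. The sketch below, again for S3 and its three-element normal subgroup (the encoding is my own, not notation from the text), lists the cosets and checks that coset multiplication does not depend on the chosen representatives, which is exactly the well-definedness argument above.

from itertools import permutations

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))         # S3
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # normal subgroup: identity and the three-cycles

def coset(g):
    # left coset gN, represented as a frozenset so cosets can be compared
    return frozenset(compose(g, n) for n in N)

print(len({coset(g) for g in G}))        # 2: the quotient group has order 2

# Well-definedness: the product coset depends only on the cosets themselves,
# not on the representatives chosen from them.
for g in G:
    for h in G:
        target = coset(compose(g, h))
        assert all(coset(compose(g2, h2)) == target
                   for g2 in coset(g) for h2 in coset(h))
print("coset multiplication is well defined")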
Mathematics
Abstract algebra
21920
https://en.wikipedia.org/wiki/Napalm
Napalm
Napalm is an incendiary mixture of a gelling agent and a volatile petrochemical (usually gasoline or diesel fuel). The name is a portmanteau of two of the constituents of the original thickening and gelling agents: coprecipitated aluminium salts of naphthenic acid and palmitic acid. A team led by chemist Louis Fieser originally developed napalm for the US Chemical Warfare Service in 1942 in a secret laboratory at Harvard University. Of immediate first interest was its viability as an incendiary device to be used in American fire bombing campaigns during World War II; its potential to be coherently projected into a solid stream that would carry for distance (instead of the bloomy fireball of pure gasoline) resulted in widespread adoption in infantry and tank/boat mounted flamethrowers as well. Napalm burns at temperatures ranging from . It burns longer than gasoline, is more easily dispersed, and adheres to its targets. These traits make it both effective and controversial. It has been widely used from the air and from the ground, the largest use having been via airdropped bombs in World War II in the incendiary attacks on Japanese cities in 1945. It was used also for close air support roles by the U.S military in the Korean War, the Vietnam War, and various others. Napalm has also fueled most of the flamethrowers (tank-, ship-, and infantry-based) used since World War II, giving them much greater range. Development The development of napalm was precipitated by the use of jellied gasoline mixtures by the Allied forces during World War II. Latex, used in these early forms of incendiary devices, became scarce, since natural rubber was almost impossible to obtain after the Japanese army captured the rubber plantations in Malaya, Indonesia, Vietnam, and Thailand. This shortage of natural rubber prompted chemists at US companies such as DuPont and Standard Oil of New Jersey, and researchers at Harvard University, to develop factory-made alternatives: artificial rubber for all uses, including vehicle tires, tank tracks, gaskets, hoses, medical supplies and rain clothing. A team of chemists led by Louis Fieser at Harvard University was the first to develop synthetic napalm during 1942. "The production of napalm was first entrusted to Nuodex Products, and by the middle of April 1942 they had developed a brown, dry powder that was not sticky by itself, but when mixed with gasoline turned into an extremely sticky and flammable substance." One of Fieser's colleagues suggested adding phosphorus to the mix which increased the "ability to penetrate deeply [...] into the musculature, where it would continue to burn day after day." On 4 July 1942, the first test occurred on the football field near the Harvard Business School. Tests under operational conditions were carried out at Jefferson Proving Ground on condemned farm buildings and subsequently at Dugway Proving Ground on buildings designed and constructed to represent those to be found in German and Japanese towns. This new mixture of chemicals was first approved for use on the front lines in 1943. Military use World War II The first use of napalm in combat was in August 1943 during the Allied invasion of Sicily, when American troops, using napalm-fueled flamethrowers, burned down a wheat field where German forces were believed to be hiding. Napalm incendiary bombs were first used the following year, although the exact date and battle are disputed. Two-thirds of napalm bombs produced during WWII were used in the Pacific War. 
Napalm was often deployed against Japanese fortifications on Saipan, Iwo Jima, the Philippines, and Okinawa, where deeply dug-in Japanese troops refused to surrender. Following a shortage of conventional thermite bombs, General Curtis LeMay, among other high-ranking servicemen, ordered air raids on Japan to start using napalm instead. A 1946 report by the National Defense Research Council claims that 40,000 tons of M69s were dropped on Japan throughout the war, damaging 64 cities and causing more deaths than the atomic bombings of Hiroshima and Nagasaki. German fortifications and transportation hubs were targeted with napalm during both Operation Overlord and the Battle of the Bulge, sometimes in conjunction with artillery. During the Allied siege of La Rochelle, napalm was dropped on the outskirts of the Royan pocket, inadvertently killing French civilians. The Royal Air Force (RAF) used napalm to a limited extent in both the Pacific War and the European Theater. Korean War Napalm was widely used by the US during the Korean War. The ground forces in North Korea holding defensive positions were often outnumbered by Chinese and North Koreans, but US Air Force and Navy aviators had control of the air over nearly all of the Korean Peninsula. Hence, the American and other UN aviators used napalm for close air support of the ground troops. Napalm was used most notably at the beginning of the Battle of Outpost Harry. Eighth Army chemical officer Donald Bode reported that, on an "average good day", UN pilots used (70,000 US gal; ) of napalm, with approximately (60,000 US gal; ) of this thrown by US forces. The New York Herald Tribune hailed "Napalm, the No. 1 Weapon in Korea". British Prime Minister Winston Churchill privately criticized the use of napalm in Korea, writing that it was "very cruel", as US/UN forces, he wrote, were "splashing it all over the civilian population", "tortur[ing] great masses of people". He conveyed these sentiments to U.S. Chairman of the Joint Chiefs of Staff Omar Bradley, who "never published the statement". Publicly, Churchill allowed Bradley "to issue a statement that confirmed U.K. support for U.S. napalm attacks". Vietnam War Napalm became an intrinsic element of US military action during the Vietnam War as forces made increasing use of it for its tactical and psychological effects. Reportedly about (388,000 short tons; ) of US napalm bombs were dropped in the region between 1963 and 1973. The US Air Force and US Navy used napalm with great effect against all kinds of targets, such as troops, tanks, buildings, jungles, and even railroad tunnels. The effect was not always purely physical as its destructive effects and ability to spread uncontrolled had psychological effects on Vietnamese forces and civilians as well. Others During the Greek Civil War, after the capture of Mount Vitsi during Operation Pyrsos, the Hellenic Air Force bombed Mount Grammos—a stronghold for the opposing Democratic Army of Greece—with US-supplied napalm. The French Air Force regularly used napalm for close air support of ground operations in both the First Indochina War and the Algerian War. At first, the canisters were simply pushed out the cargo doors of transport planes, such as the Amiot AAC.1; later mostly B-26 bombers were used. Peruvian forces employed napalm throughout the 1960s against both communist insurgents and the Matsés indigenous group; four prominent Matsés villages were bombed during the 1964 Matsés massacres. 
From 1968–1978, Rhodesia produced a variant of napalm for use in the Rhodesian Bush War, nicknamed Frantan (short for "frangible tank"). Around the same time, its ally South Africa targeted guerrilla bases in Angola with napalm during the South African Border War. In 2018, Turkey was accused of using napalm in Operation Olive Branch against Kurdish nationalist groups. Antipersonnel effects When used as a part of an incendiary weapon, napalm causes severe burns. During combustion, napalm deoxygenates the available air and generates carbon monoxide and carbon dioxide, so asphyxiation, unconsciousness, and death are also possible.Napalm is lethal even for dug-in enemy personnel, as it flows into foxholes, tunnels, and bunkers, and drainage and irrigation ditches and other improvised troop shelters. Even people in undamaged shelters can be killed by hyperthermia, radiant heat, dehydration, asphyxiation, smoke exposure, or carbon monoxide poisoning. Crews of armored fighting vehicles are also vulnerable, due to the intense heat conducted through the armor. Even in the case of a near miss, the heat can be enough to disable a vehicle. One firebomb released from a low-flying plane can damage an area of . International law International law does not specifically prohibit the use of napalm or other incendiaries against military targets, but use against civilian populations was banned under Protocol III of the United Nations Convention on Certain Conventional Weapons in 1980, which entered into force as international law in December 1983. As of January 2023, 126 countries have ratified Protocol III.
Technology
Incendiary weapons
21935
https://en.wikipedia.org/wiki/Nondeterministic%20Turing%20machine
Nondeterministic Turing machine
In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action when in some given situations. That is, an NTM's next state is not completely determined by its action and the current symbol it sees, unlike a deterministic Turing machine. NTMs are sometimes used in thought experiments to examine the abilities and limits of computers. One of the most important open problems in theoretical computer science is the P versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer. Background In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internal state and what symbol it currently sees. An example of one of a Turing Machine's rules might thus be: "If you are in state 2 and you see an 'A', then change it to 'B', move left, and switch to state 3." Deterministic Turing machine In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation. A deterministic Turing machine has a transition function that, for a given state and symbol under the tape head, specifies three things: the symbol to be written to the tape (it may be the same as the symbol currently in that position, or not even write at all, resulting in no practical change), the direction (left, right or neither) in which the head should move, and the subsequent state of the finite control. For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. Intuition In contrast to a deterministic Turing machine, in a nondeterministic Turing machine (NTM) the set of rules may prescribe more than one action to be performed for any given situation. For example, an X on the tape in state 3 might allow the NTM to: Write a Y, move right, and switch to state 5 or Write an X, move left, and stay in state 3. Resolution of multiple rules How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser"; it always picks a transition that eventually leads to an accepting state, if there is such a transition. The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree". If at least one branch of the tree halts with an "accept" condition, the NTM accepts the input. Definition A nondeterministic Turing machine can be formally defined as a six-tuple , where is a finite set of states is a finite set of symbols (the tape alphabet) is the initial state is the blank symbol is the set of accepting (final) states is a relation on states and symbols called the transition relation. is the movement to the left, is no movement, and is the movement to the right. The difference with a standard (deterministic) Turing machine is that, for deterministic Turing machines, the transition relation is a function rather than just a relation. 
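One simple way to encode such a transition relation is as a mapping from (state, symbol) pairs to a set of possible actions, which makes the contrast with a deterministic transition function (at most one action per pair) explicit. The sketch below uses the state-3/'X' example from the intuition section; the dictionary representation and the numeric encoding of head movement are illustrative choices of my own, not part of the formal definition.

# Head movements encoded numerically: left, stay, right.
L, S, R = -1, 0, +1

# Nondeterministic transition relation: each (state, symbol) key maps to a *set*
# of (next state, symbol to write, movement) actions.
delta = {
    (3, 'X'): {(5, 'Y', R),    # write Y, move right, switch to state 5
               (3, 'X', L)},   # or write X, move left, stay in state 3
}

# A deterministic machine is the special case in which every such set has at most one element.
print(len(delta[(3, 'X')]))    # 2 possible actions for this configuration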
Configurations and the yields relation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that the yields relation is no longer single-valued. (If the machine is deterministic, the possible computations are all prefixes of a single, possibly infinite, path.) The input for an NTM is provided in the same manner as for a deterministic Turing machine: the machine is started in the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise. An NTM accepts an input string if and only if at least one of the possible computational paths starting from that string puts the machine into an accepting state. When simulating the many branching paths of an NTM on a deterministic machine, we can stop the entire simulation as soon as any branch reaches an accepting state. Alternative definitions As a mathematical construction used primarily in proofs, there are a variety of minor variations on the definition of an NTM, but these variations all accept equivalent languages. The head movement in the output of the transition relation is often encoded numerically instead of using letters to represent moving the head Left (-1), Stationary (0), and Right (+1); giving a transition function output of . It is common to omit the stationary (0) output, and instead insert the transitive closure of any desired stationary transitions. Some authors add an explicit reject state, which causes the NTM to halt without accepting. This definition still retains the asymmetry that any nondeterministic branch can accept, but every branch must reject for the string to be rejected. Computational equivalence with DTMs Any computational problem that can be solved by a DTM can also be solved by a NTM, and vice versa. However, it is believed that in general the time complexity may not be the same. DTM as a special case of NTM NTMs include DTMs as special cases, so every computation that can be carried out by a DTM can also be carried out by the equivalent NTM. DTM simulation of NTM It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. However, it is possible to simulate NTMs with DTMs, and in fact this can be done in more than one way. Multiplicity of configuration states One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations. Multiplicity of tapes Another construction simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree. The 3-tape DTMs are easily simulated with a normal single-tape DTM. Time complexity and P versus NP In the second construction, the constructed DTM effectively performs a breadth-first search of the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. 
This is believed to be a general property of simulations of NTMs by DTMs. The P = NP problem, the most famous unresolved question in computer science, concerns one case of this issue: whether or not every problem solvable by a NTM in polynomial time is necessarily also solvable by a DTM in polynomial time. Bounded nondeterminism An NTM has the property of bounded nondeterminism. That is, if an NTM always halts on a given input tape T then it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations. Comparison with quantum computers Because quantum computers use quantum bits, which can be in superpositions of states, rather than conventional bits, there is sometimes a misconception that quantum computers are NTMs. However, it is believed by experts (but has not been proven) that the power of quantum computers is, in fact, incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot and vice versa. In particular, it is likely that NP-complete problems are solvable by NTMs but not by quantum computers in polynomial time. Intuitively speaking, while a quantum computer can indeed be in a superposition state corresponding to all possible computational branches having been executed at the same time (similar to an NTM), the final measurement will collapse the quantum computer into a randomly selected branch. This branch then does not, in general, represent the sought-for solution, unlike the NTM, which is allowed to pick the right solution among the exponentially many branches.
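As a concrete illustration of the deterministic simulation discussed earlier, the sketch below performs a breadth-first search of an NTM's computation tree and accepts as soon as any branch reaches an accepting state. The encoding of machines and configurations is my own, and the toy machine at the end simply guesses where an 'a' occurs in the input.

from collections import deque

def ntm_accepts(delta, start, accept, tape, blank='_', max_steps=10_000):
    # A configuration is (state, head position, tape contents as a tuple).
    frontier = deque([(start, 0, tuple(tape) or (blank,))])
    for _ in range(max_steps):
        if not frontier:
            return False                    # every branch halted without accepting
        state, pos, tp = frontier.popleft()
        if state in accept:
            return True                     # some branch accepts
        for next_state, write, move in delta.get((state, tp[pos]), ()):
            cells = list(tp)
            cells[pos] = write
            new_pos = pos + move
            if new_pos < 0:
                cells.insert(0, blank)      # grow the tape to the left
                new_pos = 0
            elif new_pos >= len(cells):
                cells.append(blank)         # grow the tape to the right
            frontier.append((next_state, new_pos, tuple(cells)))
    return False                            # give up after max_steps (simulation bound)

# Toy machine: in state 0, on 'a' either keep scanning or guess "accept here";
# on 'b' keep scanning right. State 1 is the only accepting state.
delta = {
    (0, 'a'): {(0, 'a', +1), (1, 'a', 0)},
    (0, 'b'): {(0, 'b', +1)},
}
print(ntm_accepts(delta, start=0, accept={1}, tape="bba"))   # True
print(ntm_accepts(delta, start=0, accept={1}, tape="bbb"))   # False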
Mathematics
Theoretical computer science
21938
https://en.wikipedia.org/wiki/Neoproterozoic
Neoproterozoic
The Neoproterozoic Era is the last of the three geologic eras of the Proterozoic eon, spanning from 1 billion to 538.8 million years ago, and is the last era of the Precambrian "supereon". It is preceded by the Mesoproterozoic era and succeeded by the Paleozoic era of the Phanerozoic eon, and is further subdivided into three periods, the Tonian, Cryogenian and Ediacaran. One of the most severe glaciation events known in the geologic record occurred during the Cryogenian period of the Neoproterozoic, when global ice sheets may have reached the equator and created a "Snowball Earth" lasting about 100 million years. The earliest fossils of complex life are found in the Tonian period in the form of Otavia, a primitive sponge, and the earliest fossil evidence of metazoan radiation is found in the Ediacaran period, which included the namesake Ediacaran biota as well as the oldest definitive cnidarians and bilaterians in the fossil record. According to Rino and co-workers, the sum of the continental crust formed in the Pan-African orogeny and the Grenville orogeny makes the Neoproterozoic the period of Earth's history that has produced the most continental crust. Geology At the onset of the Neoproterozoic the supercontinent Rodinia, which had assembled during the late Mesoproterozoic, straddled the equator. During the Tonian, rifting commenced, which broke Rodinia into a number of individual land masses. Possibly as a consequence of the low-latitude position of most continents, several large-scale glacial events occurred during the Neoproterozoic Era, including the Sturtian and Marinoan glaciations of the Cryogenian Period. These glaciations are believed to have been so severe that there were ice sheets at the equator—a state known as the "Snowball Earth". Subdivisions Neoproterozoic time is subdivided into the Tonian (1000–720 Ma), Cryogenian (720–635 Ma) and Ediacaran (635–538.8 Ma) periods. Russian regional timescale In the regional timescale of Russia, the Tonian and Cryogenian correspond to the Late Riphean; the Ediacaran corresponds to the Early to middle Vendian. Russian geologists divide the Neoproterozoic of Siberia into the Mayanian (from 1000 to 850 Ma) followed by the Baikalian (from 850 to 650 Ma). Paleobiology The idea of the Neoproterozoic Era was introduced in the 1960s. Nineteenth-century paleontologists set the start of multicellular life at the first appearance of hard-shelled arthropods called trilobites and archeocyathid sponges at the beginning of the Cambrian Period. In the early 20th century, paleontologists started finding fossils of multicellular animals that predated the Cambrian. A complex fauna was found in South West Africa in the 1920s but was inaccurately dated. Another fauna was found in South Australia in the 1940s, but it was not thoroughly examined until the late 1950s. Other possible early animal fossils were found in Russia, England, Canada, and elsewhere (see Ediacaran biota). Some were determined to be pseudofossils, but others were revealed to be members of rather complex biotas that remain poorly understood. At least 25 regions worldwide have yielded metazoan fossils older than the classical Precambrian–Cambrian boundary (which is currently dated at ). A few of the early animals appear to be possible ancestors of modern animals. Most fall into ambiguous groups of frond-like organisms; discoids that might be holdfasts for stalked organisms ("medusoids"); mattress-like forms; small calcareous tubes; and armored animals of unknown provenance.
These were most commonly known as Vendian biota until the formal naming of the Period, and are currently known as Ediacaran Period biota. Most were soft bodied. The relationships, if any, to modern forms are obscure. Some paleontologists relate many or most of these forms to modern animals. Others acknowledge a few possible or even likely relationships but feel that most of the Ediacaran forms are representatives of unknown animal types. In addition to Ediacaran biota, two other types of biota were discovered in China. The Doushantuo Formation (of Ediacaran age) preserves fossils of microscopic marine organisms in great detail. The Huainan biota (of late Tonian age) consists of small worm-shaped organisms. Molecular phylogeny suggests that animals may have emerged even earlier in the Neoproterozoic (early Tonian), but physical evidence for such animal life is lacking. Possible keratose sponge fossils have been reported in reefs dated to 890 million years before the present, but remain unconfirmed. Terminal period The nomenclature for the terminal period of the Neoproterozoic Era has been unstable. Russian and Nordic geologists referred to the last period of the Neoproterozoic as the Vendian, while Chinese geologists referred to it as the Sinian, and most Australians and North Americans used the name Ediacaran. However, in 2004, the International Union of Geological Sciences ratified the Ediacaran Period to be a geological age of the Neoproterozoic, ranging from to (at the time to 542) million years ago. The Ediacaran Period boundaries are the only Precambrian boundaries defined by biologic Global Boundary Stratotype Section and Points, rather than the absolute Global Standard Stratigraphic Ages.
Physical sciences
Geological timescale
Earth science
21944
https://en.wikipedia.org/wiki/Nervous%20system
Nervous system
In biology, the nervous system is the highly complex part of an animal that coordinates its actions and sensory information by transmitting signals to and from different parts of its body. The nervous system detects environmental changes that impact the body, then works in tandem with the endocrine system to respond to such events. Nervous tissue first arose in wormlike organisms about 550 to 600 million years ago. In vertebrates, it consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord. The PNS consists mainly of nerves, which are enclosed bundles of the long fibers, or axons, that connect the CNS to every other part of the body. Nerves that transmit signals from the brain are called motor nerves (efferent), while those nerves that transmit information from the body to the CNS are called sensory nerves (afferent). The PNS is divided into two separate subsystems, the somatic and autonomic, nervous systems. The autonomic nervous system is further subdivided into the sympathetic, parasympathetic and enteric nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Nerves that exit from the brain are called cranial nerves while those exiting from the spinal cord are called spinal nerves. The nervous system consists of nervous tissue which, at a cellular level, is defined by the presence of a special type of cell, called the neuron. Neurons have special structures that allow them to send signals rapidly and precisely to other cells. They send these signals in the form of electrochemical impulses traveling along thin fibers called axons, which can be directly transmitted to neighboring cells through electrical synapses or cause chemicals called neurotransmitters to be released at chemical synapses. A cell that receives a synaptic signal from a neuron may be excited, inhibited, or otherwise modulated. The connections between neurons can form neural pathways, neural circuits, and larger networks that generate an organism's perception of the world and determine its behavior. Along with neurons, the nervous system contains other specialized cells called glial cells (or simply glia), which provide structural and metabolic support. Many of the cells and vasculature channels within the nervous system make up the neurovascular unit, which regulates cerebral blood flow in order to rapidly satisfy the high energy demands of activated neurons. Nervous systems are found in most multicellular animals, but vary greatly in complexity. The only multicellular animals that have no nervous system at all are sponges, placozoans, and mesozoans, which have very simple body plans. The nervous systems of the radially symmetric organisms ctenophores (comb jellies) and cnidarians (which include anemones, hydras, corals and jellyfish) consist of a diffuse nerve net. All other animal species, with the exception of a few types of worm, have a nervous system containing a brain, a central cord (or two cords running in parallel), and nerves radiating from the brain and central cord. The size of the nervous system ranges from a few hundred cells in the simplest worms, to around 300 billion cells in African elephants. 
The central nervous system functions to send signals from one cell to others, or from one part of the body to others and to receive feedback. Malfunction of the nervous system can occur as a result of genetic defects, physical damage due to trauma or toxicity, infection, or simply senescence. The medical specialty of neurology studies disorders of the nervous system and looks for interventions that can prevent or treat them. In the peripheral nervous system, the most common problem is the failure of nerve conduction, which can be due to different causes including diabetic neuropathy and demyelinating disorders such as multiple sclerosis and amyotrophic lateral sclerosis. Neuroscience is the field of science that focuses on the study of the nervous system. Structure The nervous system derives its name from nerves, which are cylindrical bundles of fibers (the axons of neurons), that emanate from the brain and spinal cord, and branch repeatedly to innervate every part of the body. Nerves are large enough to have been recognized by the ancient Egyptians, Greeks, and Romans, but their internal structure was not understood until it became possible to examine them using a microscope. The author Michael Nikoletseas wrote: "It is difficult to believe that until approximately year 1900 it was not known that neurons are the basic units of the brain (Santiago Ramón y Cajal). Equally surprising is the fact that the concept of chemical transmission in the brain was not known until around 1930 (Henry Hallett Dale and Otto Loewi). We began to understand the basic electrical phenomenon that neurons use in order to communicate among themselves, the action potential, in the 1950s (Alan Lloyd Hodgkin, Andrew Huxley and John Eccles). It was in the 1960s that we became aware of how basic neuronal networks code stimuli and thus basic concepts are possible (David H. Hubel and Torsten Wiesel). The molecular revolution swept across US universities in the 1980s. It was in the 1990s that molecular mechanisms of behavioral phenomena became widely known (Eric Richard Kandel)." A microscopic examination shows that nerves consist primarily of axons, along with different membranes that wrap around them and segregate them into fascicles. The neurons that give rise to nerves do not lie entirely within the nerves themselves—their cell bodies reside within the brain, spinal cord, or peripheral ganglia. All animals more advanced than sponges have nervous systems. However, even sponges, unicellular animals, and non-animals such as slime molds have cell-to-cell signalling mechanisms that are precursors to those of neurons. In radially symmetric animals such as the jellyfish and hydra, the nervous system consists of a nerve net, a diffuse network of isolated cells. In bilaterian animals, which make up the great majority of existing species, the nervous system has a common structure that originated early in the Ediacaran period, over 550 million years ago. Cells The nervous system contains two main categories or types of cells: neurons and glial cells. Neurons The nervous system is defined by the presence of a special type of cell—the neuron (sometimes called "neurone" or "nerve cell"). Neurons can be distinguished from other cells in a number of ways, but their most fundamental property is that they communicate with other cells via synapses, which are membrane-to-membrane junctions containing molecular machinery that allows rapid transmission of signals, either electrical or chemical. 
Many types of neuron possess an axon, a protoplasmic protrusion that can extend to distant parts of the body and make thousands of synaptic contacts; axons typically extend throughout the body in bundles called nerves. Even in the nervous system of a single species such as humans, hundreds of different types of neurons exist, with a wide variety of morphologies and functions. These include sensory neurons that transmute physical stimuli such as light and sound into neural signals, and motor neurons that transmute neural signals into activation of muscles or glands; however in many species the great majority of neurons participate in the formation of centralized structures (the brain and ganglia) and they receive all of their input from other neurons and send their output to other neurons. Glial cells Glial cells (named from the Greek for "glue") are non-neuronal cells that provide support and nutrition, maintain homeostasis, form myelin, and participate in signal transmission in the nervous system. In the human brain, it is estimated that the total number of glia roughly equals the number of neurons, although the proportions vary in different brain areas. Among the most important functions of glial cells are to support neurons and hold them in place; to supply nutrients to neurons; to insulate neurons electrically; to destroy pathogens and remove dead neurons; and to provide guidance cues directing the axons of neurons to their targets. A very important type of glial cell (oligodendrocytes in the central nervous system, and Schwann cells in the peripheral nervous system) generates layers of a fatty substance called myelin that wraps around axons and provides electrical insulation which allows them to transmit action potentials much more rapidly and efficiently. Recent findings indicate that glial cells, such as microglia and astrocytes, serve as important resident immune cells within the central nervous system. Anatomy in vertebrates The nervous system of vertebrates (including humans) is divided into the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS is the major division, and consists of the brain and the spinal cord. The spinal canal contains the spinal cord, while the cranial cavity contains the brain. The CNS is enclosed and protected by the meninges, a three-layered system of membranes, including a tough, leathery outer layer called the dura mater. The brain is also protected by the skull, and the spinal cord by the vertebrae. The peripheral nervous system (PNS) is a collective term for the nervous system structures that do not lie within the CNS. The large majority of the axon bundles called nerves are considered to belong to the PNS, even when the cell bodies of the neurons to which they belong reside within the brain or spinal cord. The PNS is divided into somatic and visceral parts. The somatic part consists of the nerves that innervate the skin, joints, and muscles. The cell bodies of somatic sensory neurons lie in dorsal root ganglia of the spinal cord. The visceral part, also known as the autonomic nervous system, contains neurons that innervate the internal organs, blood vessels, and glands. The autonomic nervous system itself consists of two parts: the sympathetic nervous system and the parasympathetic nervous system. Some authors also include sensory neurons whose cell bodies lie in the periphery (for senses such as hearing) as part of the PNS; others, however, omit them. 
The vertebrate nervous system can also be divided into areas called gray matter and white matter. Gray matter (which is only gray in preserved tissue, and is better described as pink or light brown in living tissue) contains a high proportion of cell bodies of neurons. White matter is composed mainly of myelinated axons, and takes its color from the myelin. White matter includes all of the nerves, and much of the interior of the brain and spinal cord. Gray matter is found in clusters of neurons in the brain and spinal cord, and in cortical layers that line their surfaces. There is an anatomical convention that a cluster of neurons in the brain or spinal cord is called a nucleus, whereas a cluster of neurons in the periphery is called a ganglion. There are, however, a few exceptions to this rule, notably including the part of the forebrain called the basal ganglia. Comparative anatomy and evolution Neural precursors in sponges Sponges have no cells connected to each other by synaptic junctions, that is, no neurons, and therefore no nervous system. They do, however, have homologs of many genes that play key roles in synaptic function. Recent studies have shown that sponge cells express a group of proteins that cluster together to form a structure resembling a postsynaptic density (the signal-receiving part of a synapse). However, the function of this structure is currently unclear. Although sponge cells do not show synaptic transmission, they do communicate with each other via calcium waves and other impulses, which mediate some simple actions such as whole-body contraction. Radiata Jellyfish, comb jellies, and related animals have diffuse nerve nets rather than a central nervous system. In most jellyfish the nerve net is spread more or less evenly across the body; in comb jellies it is concentrated near the mouth. The nerve nets consist of sensory neurons, which pick up chemical, tactile, and visual signals; motor neurons, which can activate contractions of the body wall; and intermediate neurons, which detect patterns of activity in the sensory neurons and, in response, send signals to groups of motor neurons. In some cases groups of intermediate neurons are clustered into discrete ganglia. The development of the nervous system in radiata is relatively unstructured. Unlike bilaterians, radiata only have two primordial cell layers, endoderm and ectoderm. Neurons are generated from a special set of ectodermal precursor cells, which also serve as precursors for every other ectodermal cell type. Bilateria The vast majority of existing animals are bilaterians, meaning animals with left and right sides that are approximate mirror images of each other. All bilateria are thought to have descended from a common wormlike ancestor that appear as fossils beginning in the Ediacaran period, 550–600 million years ago. The fundamental bilaterian body form is a tube with a hollow gut cavity running from mouth to anus, and a nerve cord with an enlargement (a "ganglion") for each body segment, with an especially large ganglion at the front, called the "brain". Even mammals, including humans, show the segmented bilaterian body plan at the level of the nervous system. The spinal cord contains a series of segmental ganglia, each giving rise to motor and sensory nerves that innervate a portion of the body surface and underlying musculature. On the limbs, the layout of the innervation pattern is complex, but on the trunk it gives rise to a series of narrow bands. 
The top three segments belong to the brain, giving rise to the forebrain, midbrain, and hindbrain. Bilaterians can be divided, based on events that occur very early in embryonic development, into two groups (superphyla) called protostomes and deuterostomes. Deuterostomes include vertebrates as well as echinoderms, hemichordates (mainly acorn worms), and Xenoturbellidans. Protostomes, the more diverse group, include arthropods, molluscs, and numerous phyla of "worms". There is a basic difference between the two groups in the placement of the nervous system within the body: protostomes possess a nerve cord on the ventral (usually bottom) side of the body, whereas in deuterostomes the nerve cord is on the dorsal (usually top) side. In fact, numerous aspects of the body are inverted between the two groups, including the expression patterns of several genes that show dorsal-to-ventral gradients. Most anatomists now consider that the bodies of protostomes and deuterostomes are "flipped over" with respect to each other, a hypothesis that was first proposed by Geoffroy Saint-Hilaire for insects in comparison to vertebrates. Thus insects, for example, have nerve cords that run along the ventral midline of the body, while all vertebrates have spinal cords that run along the dorsal midline. Worms Worms are the simplest bilaterian animals, and reveal the basic structure of the bilaterian nervous system in the most straightforward way. As an example, earthworms have dual nerve cords running along the length of the body and merging at the tail and the mouth. These nerve cords are connected by transverse nerves like the rungs of a ladder. These transverse nerves help coordinate the two sides of the animal. Two ganglia at the head (the "nerve ring") end function similar to a simple brain. Photoreceptors on the animal's eyespots provide sensory information on light and dark. The nervous system of one very small roundworm, the nematode Caenorhabditis elegans, has been completely mapped out in a connectome including its synapses. Every neuron and its cellular lineage has been recorded and most, if not all, of the neural connections are known. In this species, the nervous system is sexually dimorphic; the nervous systems of the two sexes, males and female hermaphrodites, have different numbers of neurons and groups of neurons that perform sex-specific functions. In C. elegans, males have exactly 383 neurons, while hermaphrodites have exactly 302 neurons. Arthropods Arthropods, such as insects and crustaceans, have a nervous system made up of a series of ganglia, connected by a ventral nerve cord made up of two parallel connectives running along the length of the belly. Typically, each body segment has one ganglion on each side, though some ganglia are fused to form the brain and other large ganglia. The head segment contains the brain, also known as the supraesophageal ganglion. In the insect nervous system, the brain is anatomically divided into the protocerebrum, deutocerebrum, and tritocerebrum. Immediately behind the brain is the subesophageal ganglion, which is composed of three pairs of fused ganglia. It controls the mouthparts, the salivary glands and certain muscles. Many arthropods have well-developed sensory organs, including compound eyes for vision and antennae for olfaction and pheromone sensation. The sensory information from these organs is processed by the brain. 
In insects, many neurons have cell bodies that are positioned at the edge of the brain and are electrically passive—the cell bodies serve only to provide metabolic support and do not participate in signalling. A protoplasmic fiber runs from the cell body and branches profusely, with some parts transmitting signals and other parts receiving signals. Thus, most parts of the insect brain have passive cell bodies arranged around the periphery, while the neural signal processing takes place in a tangle of protoplasmic fibers called neuropil, in the interior. Molluscs "Identified" neurons A neuron is called identified if it has properties that distinguish it from every other neuron in the same animal—properties such as location, neurotransmitter, gene expression pattern, and connectivity—and if every individual organism belonging to the same species has one and only one neuron with the same set of properties. In vertebrate nervous systems very few neurons are "identified" in this sense—in humans, there are believed to be none—but in simpler nervous systems, some or all neurons may be thus unique. In the roundworm C. elegans, whose nervous system is the most thoroughly described of any animal's, every neuron in the body is uniquely identifiable, with the same location and the same connections in every individual worm. One notable consequence of this fact is that the form of the C. elegans nervous system is completely specified by the genome, with no experience-dependent plasticity. The brains of many molluscs and insects also contain substantial numbers of identified neurons. In vertebrates, the best known identified neurons are the gigantic Mauthner cells of fish. Every fish has two Mauthner cells, in the bottom part of the brainstem, one on the left side and one on the right. Each Mauthner cell has an axon that crosses over, innervating neurons at the same brain level and then travelling down through the spinal cord, making numerous connections as it goes. The synapses generated by a Mauthner cell are so powerful that a single action potential gives rise to a major behavioral response: within milliseconds the fish curves its body into a C-shape, then straightens, thereby propelling itself rapidly forward. Functionally this is a fast escape response, triggered most easily by a strong sound wave or pressure wave impinging on the lateral line organ of the fish. Mauthner cells are not the only identified neurons in fish—there are about 20 more types, including pairs of "Mauthner cell analogs" in each spinal segmental nucleus. Although a Mauthner cell is capable of bringing about an escape response individually, in the context of ordinary behavior other types of cells usually contribute to shaping the amplitude and direction of the response. Mauthner cells have been described as command neurons. A command neuron is a special type of identified neuron, defined as a neuron that is capable of driving a specific behavior individually. Such neurons appear most commonly in the fast escape systems of various species—the squid giant axon and squid giant synapse, used for pioneering experiments in neurophysiology because of their enormous size, both participate in the fast escape circuit of the squid. The concept of a command neuron has, however, become controversial, because of studies showing that some neurons that initially appeared to fit the description were really only capable of evoking a response in a limited set of circumstances. 
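Because every C. elegans neuron is uniquely identifiable and its connections have been mapped, a connectome can be handled computationally as a labelled directed graph. The sketch below illustrates that representation only; the neuron names and synapse counts in the example are hypothetical stand-ins chosen for demonstration, not entries taken from the actual wiring diagram.

```python
# Minimal sketch: a (hypothetical) fragment of a connectome stored as a
# directed graph. Neuron names and synapse counts below are invented for
# illustration; the real C. elegans hermaphrodite wiring diagram has 302
# neurons and thousands of synapses.

from collections import defaultdict

class Connectome:
    def __init__(self):
        # adjacency map: presynaptic neuron -> {postsynaptic neuron: synapse count}
        self.edges = defaultdict(dict)

    def add_synapse(self, pre, post, count=1):
        self.edges[pre][post] = self.edges[pre].get(post, 0) + count

    def outputs(self, neuron):
        """Neurons that receive synapses from `neuron`."""
        return dict(self.edges.get(neuron, {}))

    def total_synapses(self):
        return sum(sum(targets.values()) for targets in self.edges.values())

# Hypothetical sensory -> interneuron -> motor chain
c = Connectome()
c.add_synapse("ASH_sensory", "AVA_interneuron", count=5)
c.add_synapse("AVA_interneuron", "VA_motor", count=3)

print(c.outputs("ASH_sensory"))   # {'AVA_interneuron': 5}
print(c.total_synapses())         # 8
```

Graph queries of this kind (listing outgoing partners, totalling synapse counts) are the sort of bookkeeping on which analyses of real connectome data are built.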
Function At the most basic level, the function of the nervous system is to send signals from one cell to others, or from one part of the body to others. There are multiple ways that a cell can send signals to other cells. One is by releasing chemicals called hormones into the internal circulation, so that they can diffuse to distant sites. In contrast to this "broadcast" mode of signaling, the nervous system provides "point-to-point" signals—neurons project their axons to specific target areas and make synaptic connections with specific target cells. Thus, neural signaling is capable of a much higher level of specificity than hormonal signaling. It is also much faster: the fastest nerve signals travel at speeds that exceed 100 meters per second. At a more integrative level, the primary function of the nervous system is to control the body. It does this by extracting information from the environment using sensory receptors, sending signals that encode this information into the central nervous system, processing the information to determine an appropriate response, and sending output signals to muscles or glands to activate the response. The evolution of a complex nervous system has made it possible for various animal species to have advanced perception abilities such as vision, complex social interactions, rapid coordination of organ systems, and integrated processing of concurrent signals. In humans, the sophistication of the nervous system makes it possible to have language, abstract representation of concepts, transmission of culture, and many other features of human society that would not exist without the human brain. Neurons and synapses Most neurons send signals via their axons, although some types are capable of dendrite-to-dendrite communication. (In fact, the types of neurons called amacrine cells have no axons, and communicate only via their dendrites.) Neural signals propagate along an axon in the form of electrochemical waves called action potentials, which produce cell-to-cell signals at points where axon terminals make synaptic contact with other cells. Synapses may be electrical or chemical. Electrical synapses make direct electrical connections between neurons, but chemical synapses are much more common, and much more diverse in function. At a chemical synapse, the cell that sends signals is called presynaptic, and the cell that receives signals is called postsynaptic. Both the presynaptic and postsynaptic areas are full of molecular machinery that carries out the signalling process. The presynaptic area contains large numbers of tiny spherical vessels called synaptic vesicles, packed with neurotransmitter chemicals. When the presynaptic terminal is electrically stimulated, an array of molecules embedded in the membrane are activated, and cause the contents of the vesicles to be released into the narrow space between the presynaptic and postsynaptic membranes, called the synaptic cleft. The neurotransmitter then binds to receptors embedded in the postsynaptic membrane, causing them to enter an activated state. Depending on the type of receptor, the resulting effect on the postsynaptic cell may be excitatory, inhibitory, or modulatory in more complex ways. For example, release of the neurotransmitter acetylcholine at a synaptic contact between a motor neuron and a muscle cell induces rapid contraction of the muscle cell. 
The entire synaptic transmission process takes only a fraction of a millisecond, although the effects on the postsynaptic cell may last much longer (even indefinitely, in cases where the synaptic signal leads to the formation of a memory trace). There are literally hundreds of different types of synapses. In fact, there are over a hundred known neurotransmitters, and many of them have multiple types of receptors. Many synapses use more than one neurotransmitter—a common arrangement is for a synapse to use one fast-acting small-molecule neurotransmitter such as glutamate or GABA, along with one or more peptide neurotransmitters that play slower-acting modulatory roles. Molecular neuroscientists generally divide receptors into two broad groups: chemically gated ion channels and second messenger systems. When a chemically gated ion channel is activated, it forms a passage that allows specific types of ions to flow across the membrane. Depending on the type of ion, the effect on the target cell may be excitatory or inhibitory. When a second messenger system is activated, it starts a cascade of molecular interactions inside the target cell, which may ultimately produce a wide variety of complex effects, such as increasing or decreasing the sensitivity of the cell to stimuli, or even altering gene transcription. According to a rule called Dale's principle, which has only a few known exceptions, a neuron releases the same neurotransmitters at all of its synapses. This does not mean, though, that a neuron exerts the same effect on all of its targets, because the effect of a synapse depends not on the neurotransmitter, but on the receptors that it activates. Because different targets can (and frequently do) use different types of receptors, it is possible for a neuron to have excitatory effects on one set of target cells, inhibitory effects on others, and complex modulatory effects on others still. Nevertheless, it happens that the two most widely used neurotransmitters, glutamate and GABA, each have largely consistent effects. Glutamate has several widely occurring types of receptors, but all of them are excitatory or modulatory. Similarly, GABA has several widely occurring receptor types, but all of them are inhibitory. Because of this consistency, glutamatergic cells are frequently referred to as "excitatory neurons", and GABAergic cells as "inhibitory neurons". Strictly speaking, this is an abuse of terminology—it is the receptors that are excitatory and inhibitory, not the neurons—but it is commonly seen even in scholarly publications. One very important subset of synapses are capable of forming memory traces by means of long-lasting activity-dependent changes in synaptic strength. The best-known form of neural memory is a process called long-term potentiation (abbreviated LTP), which operates at synapses that use the neurotransmitter glutamate acting on a special type of receptor known as the NMDA receptor. The NMDA receptor has an "associative" property: if the two cells involved in the synapse are both activated at approximately the same time, a channel opens that permits calcium to flow into the target cell. The calcium entry initiates a second messenger cascade that ultimately leads to an increase in the number of glutamate receptors in the target cell, thereby increasing the effective strength of the synapse. This change in strength can last for weeks or longer. 
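The "associative" character of NMDA-receptor-dependent LTP, strengthening a synapse only when presynaptic and postsynaptic cells are active at roughly the same time, is essentially a Hebbian rule. The following sketch is a deliberately simplified illustration of that idea; the update rule, parameters, and spike trains are invented for demonstration and are not a biophysical model of the receptor.

```python
# Minimal Hebbian-style sketch of activity-dependent synaptic strengthening.
# A weight grows only when pre- and postsynaptic activity coincide, loosely
# mirroring the associative property of NMDA-receptor-dependent LTP.
# All numbers here are arbitrary demonstration values.

def update_weight(weight, pre_active, post_active, rate=0.1, w_max=1.0):
    if pre_active and post_active:        # coincident activity -> potentiation
        weight = min(w_max, weight + rate)
    return weight

pre_spikes  = [1, 1, 0, 1, 0, 1]   # hypothetical presynaptic activity per time step
post_spikes = [1, 0, 0, 1, 1, 1]   # hypothetical postsynaptic activity per time step

w = 0.2
for pre, post in zip(pre_spikes, post_spikes):
    w = update_weight(w, pre, post)
print(round(w, 2))   # 0.5 : strengthened on the three coincident steps
```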
Since the discovery of LTP in 1973, many other types of synaptic memory traces have been found, involving increases or decreases in synaptic strength that are induced by varying conditions, and last for variable periods of time. The reward system, which reinforces desired behaviour, for example, depends on a variant form of LTP that is conditioned on an extra input coming from a reward-signalling pathway that uses dopamine as its neurotransmitter. All these forms of synaptic modifiability, taken collectively, give rise to neural plasticity, that is, to a capability for the nervous system to adapt itself to variations in the environment. Neural circuits and systems The basic neuronal function of sending signals to other cells includes a capability for neurons to exchange signals with each other. Networks formed by interconnected groups of neurons are capable of a wide variety of functions, including feature detection, pattern generation and timing; countless other kinds of information processing are possible. Warren McCulloch and Walter Pitts showed in 1943 that even artificial neural networks formed from a greatly simplified mathematical abstraction of a neuron are capable of universal computation. Historically, for many years the predominant view of the function of the nervous system was as a stimulus-response associator. In this conception, neural processing begins with stimuli that activate sensory neurons, producing signals that propagate through chains of connections in the spinal cord and brain, giving rise eventually to activation of motor neurons and thereby to muscle contraction, i.e., to overt responses. Descartes believed that all of the behaviors of animals, and most of the behaviors of humans, could be explained in terms of stimulus-response circuits, although he also believed that higher cognitive functions such as language were not capable of being explained mechanistically. Charles Sherrington, in his influential 1906 book The Integrative Action of the Nervous System, developed the concept of stimulus-response mechanisms in much more detail, and behaviorism, the school of thought that dominated psychology through the middle of the 20th century, attempted to explain every aspect of human behavior in stimulus-response terms. However, experimental studies of electrophysiology, beginning in the early 20th century and reaching high productivity by the 1940s, showed that the nervous system contains many mechanisms for maintaining cell excitability and generating patterns of activity intrinsically, without requiring an external stimulus. Neurons were found to be capable of producing regular sequences of action potentials, or sequences of bursts, even in complete isolation. When intrinsically active neurons are connected to each other in complex circuits, the possibilities for generating intricate temporal patterns become far more extensive. A modern conception views the function of the nervous system partly in terms of stimulus-response chains, and partly in terms of intrinsically generated activity patterns—both types of activity interact with each other to generate the full repertoire of behavior. Reflexes and other stimulus-response circuits The simplest type of neural circuit is a reflex arc, which begins with a sensory input and ends with a motor output, passing through a sequence of neurons connected in series. This can be shown in the "withdrawal reflex" causing a hand to jerk back after a hot stove is touched. 
The circuit begins with sensory receptors in the skin that are activated by harmful levels of heat: a special type of molecular structure embedded in the membrane causes heat to change the electrical field across the membrane. If the change in electrical potential is large enough to pass the given threshold, it evokes an action potential, which is transmitted along the axon of the receptor cell, into the spinal cord. There the axon makes excitatory synaptic contacts with other cells, some of which project (send axonal output) to the same region of the spinal cord, others projecting into the brain. One target is a set of spinal interneurons that project to motor neurons controlling the arm muscles. The interneurons excite the motor neurons, and if the excitation is strong enough, some of the motor neurons generate action potentials, which travel down their axons to the point where they make excitatory synaptic contacts with muscle cells. The excitatory signals induce contraction of the muscle cells, which causes the joint angles in the arm to change, pulling the arm away. In reality, this straightforward schema is subject to numerous complications. Although for the simplest reflexes there are short neural paths from sensory neuron to motor neuron, there are also other nearby neurons that participate in the circuit and modulate the response. Furthermore, there are projections from the brain to the spinal cord that are capable of enhancing or inhibiting the reflex. Although the simplest reflexes may be mediated by circuits lying entirely within the spinal cord, more complex responses rely on signal processing in the brain. For example, when an object in the periphery of the visual field moves, and a person looks toward it many stages of signal processing are initiated. The initial sensory response, in the retina of the eye, and the final motor response, in the oculomotor nuclei of the brainstem, are not all that different from those in a simple reflex, but the intermediate stages are completely different. Instead of a one or two step chain of processing, the visual signals pass through perhaps a dozen stages of integration, involving the thalamus, cerebral cortex, basal ganglia, superior colliculus, cerebellum, and several brainstem nuclei. These areas perform signal-processing functions that include feature detection, perceptual analysis, memory recall, decision-making, and motor planning. Feature detection is the ability to extract biologically relevant information from combinations of sensory signals. In the visual system, for example, sensory receptors in the retina of the eye are only individually capable of detecting "points of light" in the outside world. Second-level visual neurons receive input from groups of primary receptors, higher-level neurons receive input from groups of second-level neurons, and so on, forming a hierarchy of processing stages. At each stage, important information is extracted from the signal ensemble and unimportant information is discarded. By the end of the process, input signals representing "points of light" have been transformed into a neural representation of objects in the surrounding world and their properties. The most sophisticated sensory processing occurs inside the brain, but complex feature extraction also takes place in the spinal cord and in peripheral sensory organs such as the retina. 
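Both the feature-detecting hierarchy just described and the 1943 McCulloch–Pitts result mentioned earlier rest on the same abstraction: a neuron reduced to a threshold unit that fires when the weighted sum of its inputs reaches a threshold. The sketch below shows such units implementing logic gates and, stacked in two levels, a rudimentary edge detector built from "point of light" responses. All weights and thresholds are chosen purely for illustration; this is a cartoon of the idea, not a model of real neurons or of the retina.

```python
# Sketch of McCulloch-Pitts threshold units (1943): a unit outputs 1 when the
# weighted sum of its binary inputs reaches its threshold. Networks of such
# units can implement logic functions, and layered units give a cartoon of
# hierarchical feature detection. Weights and thresholds are illustrative.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Elementary logic gates as single units
AND = lambda a, b: mp_unit([a, b], [1, 1], 2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)
NOT = lambda a:    mp_unit([a],    [-1],   0)

# XOR requires a second layer of units -- a minimal "hierarchy"
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 0]

# A toy "second-level" feature detector: first-level units report points of
# light; a second-level unit fires where a bright point sits next to a dark one.
points = [0, 0, 1, 1, 1, 0]                      # hypothetical first-level responses
edges = [mp_unit([a, b], [1, -1], 1) or mp_unit([a, b], [-1, 1], 1)
         for a, b in zip(points, points[1:])]
print(edges)                                      # [0, 1, 0, 0, 1]
```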
Intrinsic pattern generation Although stimulus-response mechanisms are the easiest to understand, the nervous system is also capable of controlling the body in ways that do not require an external stimulus, by means of internally generated rhythms of activity. Because of the variety of voltage-sensitive ion channels that can be embedded in the membrane of a neuron, many types of neurons are capable, even in isolation, of generating rhythmic sequences of action potentials, or rhythmic alternations between high-rate bursting and quiescence. When neurons that are intrinsically rhythmic are connected to each other by excitatory or inhibitory synapses, the resulting networks are capable of a wide variety of dynamical behaviors, including attractor dynamics, periodicity, and even chaos. A network of neurons that uses its internal structure to generate temporally structured output, without requiring a corresponding temporally structured stimulus, is called a central pattern generator. Internal pattern generation operates on a wide range of time scales, from milliseconds to hours or longer. One of the most important types of temporal pattern is circadian rhythmicity—that is, rhythmicity with a period of approximately 24 hours. All animals that have been studied show circadian fluctuations in neural activity, which control circadian alternations in behavior such as the sleep-wake cycle. Experimental studies dating from the 1990s have shown that circadian rhythms are generated by a "genetic clock" consisting of a special set of genes whose expression level rises and falls over the course of the day. Animals as diverse as insects and vertebrates share a similar genetic clock system. The circadian clock is influenced by light but continues to operate even when light levels are held constant and no other external time-of-day cues are available. The clock genes are expressed in many parts of the nervous system as well as many peripheral organs, but in mammals, all of these "tissue clocks" are kept in synchrony by signals that emanate from a master timekeeper in a tiny part of the brain called the suprachiasmatic nucleus. Mirror neurons A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primate species. Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex and the inferior parietal cortex. The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for the perception/action coupling (see the common coding theory). They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills, while others relate mirror neurons to language abilities. 
However, to date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions such as imitation. There are neuroscientists who caution that the claims being made for the role of mirror neurons are not supported by adequate research. Development In vertebrates, landmarks of embryonic neural development include the birth and differentiation of neurons from stem cell precursors, the migration of immature neurons from their birthplaces in the embryo to their final positions, outgrowth of axons from neurons and guidance of the motile growth cone through the embryo towards postsynaptic partners, the generation of synapses between these axons and their postsynaptic partners, and finally the lifelong changes in synapses which are thought to underlie learning and memory. All bilaterian animals at an early stage of development form a gastrula, which is polarized, with one end called the animal pole and the other the vegetal pole. The gastrula has the shape of a disk with three layers of cells, an inner layer called the endoderm, which gives rise to the lining of most internal organs, a middle layer called the mesoderm, which gives rise to the bones and muscles, and an outer layer called the ectoderm, which gives rise to the skin and nervous system. In vertebrates, the first sign of the nervous system is the appearance of a thin strip of cells along the center of the back, called the neural plate. The inner portion of the neural plate (along the midline) is destined to become the central nervous system (CNS), the outer portion the peripheral nervous system (PNS). As development proceeds, a fold called the neural groove appears along the midline. This fold deepens, and then closes up at the top. At this point the future CNS appears as a cylindrical structure called the neural tube, whereas the future PNS appears as two strips of tissue called the neural crest, running lengthwise above the neural tube. The sequence of stages from neural plate to neural tube and neural crest is known as neurulation. In the early 20th century, a set of famous experiments by Hans Spemann and Hilde Mangold showed that the formation of nervous tissue is "induced" by signals from a group of mesodermal cells called the organizer region. For decades, though, the nature of neural induction defeated every attempt to figure it out, until finally it was resolved by genetic approaches in the 1990s. Induction of neural tissue requires inhibition of the gene for a so-called bone morphogenetic protein, or BMP. Specifically the protein BMP4 appears to be involved. Two proteins called Noggin and Chordin, both secreted by the mesoderm, are capable of inhibiting BMP4 and thereby inducing ectoderm to turn into neural tissue. It appears that a similar molecular mechanism is involved for widely disparate types of animals, including arthropods as well as vertebrates. In some animals, however, another type of molecule called Fibroblast Growth Factor or FGF may also play an important role in induction. Induction of neural tissues causes formation of neural precursor cells, called neuroblasts. In Drosophila, neuroblasts divide asymmetrically, so that one product is a "ganglion mother cell" (GMC), and the other is a neuroblast. A GMC divides once, to give rise to either a pair of neurons or a pair of glial cells. In all, a neuroblast is capable of generating an indefinite number of neurons or glia. 
As shown in a 2008 study, one factor common to all bilateral organisms (including humans) is a family of secreted signaling molecules called neurotrophins which regulate the growth and survival of neurons. Zhu et al. identified DNT1, the first neurotrophin found in flies. DNT1 shares structural similarity with all known neurotrophins and is a key factor in the fate of neurons in Drosophila. Because neurotrophins have now been identified in both vertebrates and invertebrates, this evidence suggests that neurotrophins were present in an ancestor common to bilateral organisms and may represent a common mechanism for nervous system formation. Pathology The central nervous system is protected by major physical and chemical barriers. Physically, the brain and spinal cord are surrounded by tough meningeal membranes, and enclosed in the bones of the skull and vertebral column, which combine to form a strong physical shield. Chemically, the brain and spinal cord are isolated by the blood–brain barrier, which prevents most types of chemicals from moving from the bloodstream into the interior of the CNS. These protections make the CNS less susceptible to damage in many ways than the PNS; the flip side, however, is that damage to the CNS tends to have more serious consequences. Although nerves tend to lie deep under the skin except in a few places such as the ulnar nerve near the elbow joint, they are still relatively exposed to physical damage, which can cause pain, loss of sensation, or loss of muscle control. Damage to nerves can also be caused by swelling or bruises at places where a nerve passes through a tight bony channel, as happens in carpal tunnel syndrome. If a nerve is completely transected, it will often regenerate, but for long nerves this process may take months to complete. In addition to physical damage, peripheral neuropathy may be caused by many other medical problems, including genetic conditions, metabolic conditions such as diabetes, inflammatory conditions such as Guillain–Barré syndrome, vitamin deficiency, infectious diseases such as leprosy or shingles, or poisoning by toxins such as heavy metals. Many cases have no cause that can be identified, and are referred to as idiopathic. It is also possible for nerves to lose function temporarily, resulting in numbness or stiffness—common causes include mechanical pressure, a drop in temperature, or chemical interactions with local anesthetic drugs such as lidocaine. Physical damage to the spinal cord may result in loss of sensation or movement. If an injury to the spine produces nothing worse than swelling, the symptoms may be transient, but if nerve fibers in the spine are actually destroyed, the loss of function is usually permanent. Experimental studies have shown that spinal nerve fibers attempt to regrow in the same way as peripheral nerve fibers, but in the spinal cord, tissue destruction usually produces scar tissue that cannot be penetrated by the regrowing nerves. Neurological practice draws heavily on the fields of neuroscience and psychiatry to treat diseases of the nervous system using various techniques of neurotherapy.
Biology and health sciences
Biology
null
21958
https://en.wikipedia.org/wiki/Nucleolus
Nucleolus
The nucleolus (plural: nucleoli) is the largest structure in the nucleus of eukaryotic cells. It is best known as the site of ribosome biogenesis. The nucleolus also participates in the formation of signal recognition particles and plays a role in the cell's response to stress. Nucleoli are made of proteins, DNA and RNA, and form around specific chromosomal regions called nucleolar organizing regions. Malfunction of the nucleolus is the cause of several human conditions called "nucleolopathies" and the nucleolus is being investigated as a target for cancer chemotherapy. History The nucleolus was identified by bright-field microscopy during the 1830s. Theodor Schwann in his 1839 treatise described that Schleiden had identified small corpuscles in nuclei, and named the structures "Kernkörperchen". In a 1847 translation of the work to English, the structure was named "nucleolus". Little was known about the function of the nucleolus until 1964, when a study of nucleoli by John Gurdon and Donald Brown in the African clawed frog Xenopus laevis generated increased interest in its function and detailed structure. They found that 25% of the frog eggs had no nucleolus, and that such eggs were not capable of life. Half of the eggs had one nucleolus and 25% had two. They concluded that the nucleolus had a function necessary for life. In 1966, Max L. Birnstiel and collaborators showed via nucleic acid hybridization experiments that DNA within nucleoli codes for ribosomal RNA. Structure Three major components of the nucleolus are recognized: the fibrillar center (FC), the dense fibrillar component (DFC), and the granular component (GC). Transcription of the rDNA occurs in the FC. The DFC contains the protein fibrillarin, which is important in rRNA processing. The GC contains the protein nucleophosmin (also known as B23), which is also involved in ribosome biogenesis. However, it has been proposed that this particular organization is only observed in higher eukaryotes and that it evolved from a bipartite organization with the transition from anamniotes to amniotes. Reflecting the substantial increase in the DNA intergenic region, an original fibrillar component would have separated into the FC and the DFC. Another structure identified within many nucleoli (particularly in plants) is a clear area in the center of the structure referred to as a nucleolar vacuole. Nucleoli of various plant species have been shown to have very high concentrations of iron in contrast to human and animal cell nucleoli. The nucleolus ultrastructure can be seen through an electron microscope, while the organization and dynamics can be studied through fluorescent protein tagging and fluorescent recovery after photobleaching (FRAP). Antibodies against the PAF49 protein can also be used as a marker for the nucleolus in immunofluorescence experiments. Although usually only one or two nucleoli can be seen, a diploid human cell has ten nucleolus organizer regions (NORs) and could have more nucleoli. Most often multiple NORs participate in each nucleolus. Function and ribosome assembly In ribosome biogenesis, two of the three eukaryotic RNA polymerases (Pol I and Pol III) are required, and these function in a coordinated manner. In an initial stage, the rRNA genes are transcribed as a single unit within the nucleolus by RNA polymerase I. In order for this transcription to occur, several pol I-associated factors and DNA-specific trans-acting factors are required. 
In yeast, the most important are: UAF (upstream activating factor), TBP (TATA-box binding protein), and core binding factor (CBF), which bind promoter elements and form the preinitiation complex (PIC), which is in turn recognized by RNA polymerase. In humans, a similar PIC is assembled with SL1, the promoter selectivity factor (composed of TBP and TBP-associated factors, or TAFs), transcription initiation factors, and UBF (upstream binding factor). RNA polymerase I transcribes most rRNA transcripts (28S, 18S, and 5.8S), but the 5S rRNA subunit (component of the 60S ribosomal subunit) is transcribed by RNA polymerase III. Transcription of rRNA yields a long precursor molecule (45S pre-rRNA), which still contains the internal transcribed spacer (ITS) and external transcribed spacer (ETS). Further processing is needed to generate the 18S RNA, 5.8S, and 28S RNA molecules. In eukaryotes, the RNA-modifying enzymes are brought to their respective recognition sites by interaction with guide RNAs, which bind these specific sequences. These guide RNAs belong to the class of small nucleolar RNAs (snoRNAs), which are complexed with proteins and exist as small-nucleolar-ribonucleoproteins (snoRNPs). Once the rRNA subunits are processed, they are ready to be assembled into larger ribosomal subunits. However, an additional rRNA molecule, the 5S rRNA, is also necessary. In yeast, the 5S rDNA sequence is localized in the intergenic spacer and is transcribed in the nucleolus by RNA polymerase. In higher eukaryotes and plants, the situation is more complex, for the 5S DNA sequence lies outside the NOR and is transcribed by RNA Pol III in the nucleoplasm, after which it finds its way into the nucleolus to participate in the ribosome assembly. This assembly not only involves the rRNA, but also ribosomal proteins. The genes encoding these r-proteins are transcribed by Pol II in the nucleoplasm by a "conventional" pathway of protein synthesis (transcription, pre-mRNA processing, nuclear export of mature mRNA, and translation on cytoplasmic ribosomes). The mature r-proteins are then imported into the nucleus and, finally, the nucleolus. Association and maturation of rRNA and r-proteins result in the formation of the 40S (small) and 60S (large) subunits of the complete ribosome. These are exported through the nuclear pore complexes to the cytoplasm, where they remain free or become associated with the endoplasmic reticulum, forming the rough endoplasmic reticulum (RER). In human endometrial cells, a network of nucleolar channels is sometimes formed. The origin and function of this network have not yet been clearly identified. Sequestration of proteins In addition to its role in ribosomal biogenesis, the nucleolus is known to capture and immobilize proteins, a process known as nucleolar detention. Proteins that are detained in the nucleolus are unable to diffuse and to interact with their binding partners. Targets of this post-translational regulatory mechanism include VHL, PML, MDM2, POLD1, RelA, HAND1 and hTERT, among many others. It is now known that long noncoding RNAs originating from intergenic regions of the nucleolus are responsible for this phenomenon.
Biology and health sciences
Organelles
Biology
21961
https://en.wikipedia.org/wiki/Nucleon
Nucleon
In physics and chemistry, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines the atom's mass number (nucleon number). Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are understood as composite particles, made of three quarks bound together by the strong interaction. The interaction between two or more nucleons is called internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.) Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that describe the properties of quarks and of the strong interaction. These equations describe quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully describe nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay. The proton and neutron are in a scheme of categories being at once fermions, hadrons and baryons. The proton carries a positive net charge, and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together form an isospin doublet (isospin I = 1/2). In isospin space, neutrons can be transformed into protons and conversely by SU(2) symmetries. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to Noether's theorem, isospin is conserved with respect to the strong interaction. Overview Properties Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around 10 minutes, but they have important applications (see neutron radiation and neutron scattering). Protons not bound to other nucleons are the nuclei of hydrogen atoms when paired with an electron or, if not bound to anything, are ions or cosmic rays. Both the proton and the neutron are composite particles, meaning that each is composed of smaller parts, namely three quarks each; although once thought to be so, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level. An up quark has electric charge +2/3 e, and a down quark has charge −1/3 e, so the summed electric charges of proton and neutron are +e and 0, respectively. Thus, the neutron has a charge of 0 (zero), and therefore is electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral. The masses of the proton and neutron are similar: for the proton it is about 938.27 MeV/c2 (about 1.0073 Da), while for the neutron it is about 939.57 MeV/c2 (about 1.0087 Da); the neutron is roughly 0.13% heavier. 
The similarity in mass can be explained roughly by the slight difference in masses of up quarks and down quarks composing the nucleons. However, a detailed description remains an unsolved problem in particle physics. The spin of the nucleon is 1/2, which means that nucleons are fermions and, like electrons, are subject to the Pauli exclusion principle: no more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state. The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely, it has two protons (having opposite spin) and two neutrons (also having opposite spin), and its net nuclear spin is zero. In larger nuclei constituent nucleons, by Pauli exclusion, are compelled to have relative motion, which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry. Both the proton and neutron have magnetic moments, though the nucleon magnetic moments are anomalous and were unexpected when they were discovered in the 1930s. The proton's magnetic moment, symbol μp, is about 2.79 μN, whereas, if the proton were an elementary Dirac particle, it should have a magnetic moment of 1 μN. Here the unit for the magnetic moments is the nuclear magneton, symbol μN, an atomic-scale unit of measure. The neutron's magnetic moment is μn = −1.91 μN, whereas, since the neutron lacks an electric charge, it should have no magnetic moment. The value of the neutron's magnetic moment is negative because the direction of the moment is opposite to the neutron's spin. The nucleon magnetic moments arise from the quark substructure of the nucleons. The proton magnetic moment is exploited for NMR / MRI scanning. Stability A free neutron is an unstable particle, with a half-life around ten minutes. It undergoes beta decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. This reaction can occur because the mass of the neutron is slightly greater than that of the proton. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics (see Proton decay). Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through β+ decay or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form. Antinucleons Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be exactly true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei. 
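The quoted mass difference and the anomalous magnetic moments can be checked with a couple of lines of arithmetic. The snippet below uses rounded standard values of the constants; it is an illustrative check, not a definitive computation.

```python
# Quick numerical check of two statements above: the neutron is heavier than
# the proton by a bit over 0.1%, and the measured nucleon magnetic moments
# differ sharply from the "point Dirac particle" expectation (+1 nuclear
# magneton for the proton, 0 for the neutron). Constants are rounded values.

m_p = 938.272   # proton mass, MeV/c^2
m_n = 939.565   # neutron mass, MeV/c^2

mass_excess = (m_n - m_p) / m_n * 100
print(f"neutron heavier by {mass_excess:.3f}%")   # ~0.138%, i.e. roughly 0.13-0.14%

mu_p = 2.793    # proton magnetic moment, nuclear magnetons
mu_n = -1.913   # neutron magnetic moment, nuclear magnetons
print(f"proton anomaly:  {mu_p - 1.0:+.3f} mu_N")  # departure from Dirac value +1
print(f"neutron anomaly: {mu_n - 0.0:+.3f} mu_N")  # departure from expected 0
```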
Tables of detailed properties Nucleons The masses of the proton and neutron are known with far greater precision in daltons (Da) than in MeV/c2 due to the way in which these are defined. The conversion factor used is 1 Da = 931.494 MeV/c2. At least 10^35 years. See proton decay. For free neutrons; in most common nuclei, neutrons are stable. The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show that any relative difference between the masses of the proton and antiproton, or between the neutron and antineutron, is too small to have been detected. Nucleon resonances Nucleon resonances are excited states of nucleon particles, often corresponding to one of the quarks having a flipped spin state, or with different orbital angular momentum when the particle decays. Only resonances with a 3- or 4-star rating at the Particle Data Group (PDG) are included in this table. Due to their extraordinarily short lifetimes, many properties of these particles are still under investigation. The symbol format is given as N(m) L2I2J, where m is the particle's approximate mass, L is the orbital angular momentum (in the spectroscopic notation) of the nucleon–meson pair produced when it decays, and I and J are the particle's isospin and total angular momentum respectively. Since nucleons are defined as having isospin 1/2, the first number will always be 1, and the second number will always be odd. When discussing nucleon resonances, sometimes the N is omitted and the order is reversed, in the form L2I2J(m); for example, a proton can be denoted as "N(939) S11" or "S11 (939)". The table below lists only the base resonance; each individual entry represents 4 baryons: 2 nucleon resonance particles and their 2 antiparticles. Each resonance exists in a form with a positive electric charge, with a quark composition of uud like the proton, and a neutral form, with a quark composition of udd like the neutron, as well as the corresponding antiparticles with antiquark compositions of u̅u̅d̅ and u̅d̅d̅ respectively. Since they contain no strange, charm, bottom, or top quarks, these particles do not possess strangeness, etc. The table only lists the resonances with an isospin of 1/2. For resonances with isospin 3/2, see the article on Delta baryons. † The P11(939) nucleon represents the excited state of a normal proton or neutron. Such a particle may be stable when in an atomic nucleus, e.g. in lithium-6. Quark model classification In the quark model with SU(2) flavour, the two nucleons are part of the ground-state doublet. The proton has quark content of uud, and the neutron, udd. In SU(3) flavour, they are part of the ground-state octet (8) of spin-1/2 baryons, known as the Eightfold way. The other members of this octet are the hyperons: the strange isotriplet Σ+, Σ0, Σ−, the Λ, and the strange isodoublet Ξ0, Ξ−. One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground-state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground-state 56-plet. The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates. Models Although it is known that the nucleon is made from three quarks, it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon is performed by means of models. 
The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist: Skyrmion models The skyrmion models the nucleon as a topological soliton in a nonlinear SU(2) pion field. The topological stability of the skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog space, the model is readily solvable, and is thus sometimes called the hedgehog model. The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius and axial coupling constant, to approximately 30% of experimental values. MIT bag model The MIT bag model confines quarks and gluons interacting through quantum chromodynamics to a region of space determined by balancing the pressure exerted by the quarks and gluons against a hypothetical pressure exerted by the vacuum on all colored quantum fields. The simplest approximation to the model confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard-boundary condition is justified by quark confinement. Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations, and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass. Although the basic bag model does not provide a pion-mediated interaction, it gives a good description of the nucleon–nucleon forces through the 6 quark bag s-channel mechanism using the P-matrix. Chiral bag model The chiral bag model merges the MIT bag model and the skyrmion model. In this model, a hole is punched out of the middle of the skyrmion and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary. Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. To date, this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry. Several other properties of the chiral bag are notable: It provides a better fit to the low-energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral-bag radius, as long as the radius is less than the nucleon radius. This independence of radius is referred to as the Cheshire Cat principle, after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark–meson descriptions.
Physical sciences
Nuclear physics
Physics
21989
https://en.wikipedia.org/wiki/Nitrogen%20fixation
Nitrogen fixation
Nitrogen fixation is a chemical process by which molecular dinitrogen (N2) is converted into ammonia (NH3). It occurs both biologically and abiologically in chemical industries. Biological nitrogen fixation or diazotrophy is catalyzed by enzymes called nitrogenases. These enzyme complexes are encoded by the Nif genes (or Nif homologs) and contain iron, often with a second metal (usually molybdenum, but sometimes vanadium). Some nitrogen-fixing bacteria have symbiotic relationships with plants, especially legumes, mosses and aquatic ferns such as Azolla. Looser non-symbiotic relationships between diazotrophs and plants are often referred to as associative, as seen in nitrogen fixation on rice roots. Nitrogen fixation also occurs between some termites and fungi. It occurs naturally in the air by means of NOx production by lightning. Nitrogen fixation is essential to life on Earth because fixed inorganic nitrogen compounds are required for the biosynthesis of all nitrogen-containing organic compounds such as amino acids, polypeptides and proteins, nucleoside triphosphates and nucleic acids. As part of the nitrogen cycle, it is essential for soil fertility and the growth of terrestrial and semiaquatic vegetation, upon which all consumers of those ecosystems rely for biomass. Nitrogen fixation is thus crucial to the food security of human societies in sustaining agricultural yields (especially staple crops), livestock feeds (forage or fodder) and fishery (both wild and farmed) harvests. It is also indirectly relevant to the manufacture of all nitrogenous industrial products, which include fertilizers, pharmaceuticals, textiles, dyes and explosives.

History
Biological nitrogen fixation was discovered by Jean-Baptiste Boussingault in 1838. Later, in 1880, the process by which it happens was discovered by German agronomists Hermann Hellriegel and Hermann Wilfarth and was fully described by Dutch microbiologist Martinus Beijerinck. "The protracted investigations of the relation of plants to the acquisition of nitrogen begun by de Saussure, Ville, Lawes, Gilbert and others, and culminated in the discovery of symbiotic fixation by Hellriegel and Wilfarth in 1887." "Experiments by Bossingault in 1855 and Pugh, Gilbert & Lawes in 1887 had shown that nitrogen did not enter the plant directly. The discovery of the role of nitrogen fixing bacteria by Herman Hellriegel and Herman Wilfarth in 1886–1888 would open a new era of soil science." In 1901, Beijerinck showed that Azotobacter chroococcum was able to fix atmospheric nitrogen. This was the first species of the azotobacter genus, so-named by him. It is also the first known diazotroph, a species that uses diatomic nitrogen as a step in the complete nitrogen cycle.

Biological
Biological nitrogen fixation (BNF) occurs when atmospheric nitrogen is converted to ammonia by a nitrogenase enzyme. The overall reaction for BNF is:
N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi
The process is coupled to the hydrolysis of 16 equivalents of ATP and is accompanied by the co-formation of one equivalent of H2. The conversion of N2 into ammonia occurs at a metal cluster called FeMoco, an abbreviation for the iron-molybdenum cofactor. The mechanism proceeds via a series of protonation and reduction steps wherein the FeMoco active site hydrogenates the substrate. In free-living diazotrophs, nitrogenase-generated ammonia is assimilated into glutamate through the glutamine synthetase/glutamate synthase pathway. The microbial nif genes required for nitrogen fixation are widely distributed in diverse environments.
For example, decomposing wood, which generally has a low nitrogen content, has been shown to host a diazotrophic community. The bacteria enrich the wood substrate with nitrogen through fixation, thus enabling deadwood decomposition by fungi. Nitrogenases are rapidly degraded by oxygen. For this reason, many bacteria cease production of the enzyme in the presence of oxygen. Many nitrogen-fixing organisms exist only in anaerobic conditions, respiring to draw down oxygen levels, or binding the oxygen with a protein such as leghemoglobin. Importance of nitrogen Atmospheric nitrogen is inaccessible to most organisms, because its triple covalent bond is very strong. Most take up fixed nitrogen from various sources. For every 100 atoms of carbon, roughly 2 to 20 atoms of nitrogen are assimilated. The atomic ratio of carbon (C) : nitrogen (N) : phosphorus (P) observed on average in planktonic biomass was originally described by Alfred Redfield, who determined the stoichiometric relationship between C:N:P atoms, The Redfield Ratio, to be 106:16:1. Nitrogenase The protein complex nitrogenase is responsible for catalyzing the reduction of nitrogen gas (N2) to ammonia (NH3). In cyanobacteria, this enzyme system is housed in a specialized cell called the heterocyst. The production of the nitrogenase complex is genetically regulated, and the activity of the protein complex is dependent on ambient oxygen concentrations, and intra- and extracellular concentrations of ammonia and oxidized nitrogen species (nitrate and nitrite). Additionally, the combined concentrations of both ammonium and nitrate are thought to inhibit NFix, specifically when intracellular concentrations of 2-oxoglutarate (2-OG) exceed a critical threshold. The specialized heterocyst cell is necessary for the performance of nitrogenase as a result of its sensitivity to ambient oxygen. Nitrogenase consist of two proteins, a catalytic iron-dependent protein, commonly referred to as MoFe protein and a reducing iron-only protein (Fe protein). There are three different iron dependent proteins, molybdenum-dependent, vanadium-dependent, and iron-only, with all three nitrogenase protein variations containing an iron protein component. Molybdenum-dependent nitrogenase is the most commonly present nitrogenase. The different types of nitrogenase can be determined by the specific iron protein component. Nitrogenase is highly conserved. Gene expression through DNA sequencing can distinguish which protein complex is present in the microorganism and potentially being expressed. Most frequently, the nifH gene is used to identify the presence of molybdenum-dependent nitrogenase, followed by closely related nitrogenase reductases (component II) vnfH and anfH representing vanadium-dependent and iron-only nitrogenase, respectively. In studying the ecology and evolution of nitrogen-fixing bacteria, the nifH gene is the biomarker most widely used. nifH has two similar genes anfH and vnfH that also encode for the nitrogenase reductase component of the nitrogenase complex. Evolution of Nitrogenase Nitrogenase is thought to have evolved sometime between 1.5-2.2 billion years ago (Ga), although some isotopic support showing nitrogenase evolution as early as around 3.2 Ga. Nitrogenase appears to have evolved from maturase-like proteins, although the function of the preceding protein is currently unknown. 
Nitrogenase has three different forms (Nif, Anf, and Vnf) that correspond with the metal found in the active site of the protein (Molybdenum, Iron, and Vanadium respectively). Marine metal abundances over Earth’s geologic timeline are thought to have driven the relative abundance of which form of nitrogenase was most common. Currently, there is no conclusive agreement on which form of nitrogenase arose first. Microorganisms Diazotrophs are widespread within domain Bacteria including cyanobacteria (e.g. the highly significant Trichodesmium and Cyanothece), green sulfur bacteria, purple sulfur bacteria, Azotobacteraceae, rhizobia and Frankia. Several obligately anaerobic bacteria fix nitrogen including many (but not all) Clostridium spp. Some archaea such as Methanosarcina acetivorans also fix nitrogen, and several other methanogenic taxa, are significant contributors to nitrogen fixation in oxygen-deficient soils. Cyanobacteria, commonly known as blue-green algae, inhabit nearly all illuminated environments on Earth and play key roles in the carbon and nitrogen cycle of the biosphere. In general, cyanobacteria can use various inorganic and organic sources of combined nitrogen, such as nitrate, nitrite, ammonium, urea, or some amino acids. Several cyanobacteria strains are also capable of diazotrophic growth, an ability that may have been present in their last common ancestor in the Archean eon. Nitrogen fixation not only naturally occurs in soils but also aquatic systems, including both freshwater and marine. Indeed, the amount of nitrogen fixed in the ocean is at least as much as that on land. The colonial marine cyanobacterium Trichodesmium is thought to fix nitrogen on such a scale that it accounts for almost half of the nitrogen fixation in marine systems globally. Marine surface lichens and non-photosynthetic bacteria belonging in Proteobacteria and Planctomycetes fixate significant atmospheric nitrogen. Species of nitrogen fixing cyanobacteria in fresh waters include: Aphanizomenon and Dolichospermum (previously Anabaena). Such species have specialized cells called heterocytes, in which nitrogen fixation occurs via the nitrogenase enzyme. Algae One type of organelle can turn nitrogen gas into a biologically available form. This nitroplast was discovered in algae. Root nodule symbioses Legume family Plants that contribute to nitrogen fixation include those of the legume family—Fabaceae— with taxa such as kudzu, clover, soybean, alfalfa, lupin, peanut and rooibos. They contain symbiotic rhizobia bacteria within nodules in their root systems, producing nitrogen compounds that help the plant to grow and compete with other plants. When the plant dies, the fixed nitrogen is released, making it available to other plants; this helps to fertilize the soil. The great majority of legumes have this association, but a few genera (e.g., Styphnolobium) do not. In many traditional farming practices, fields are rotated through various types of crops, which usually include one consisting mainly or entirely of clover. Fixation efficiency in soil is dependent on many factors, including the legume and air and soil conditions. For example, nitrogen fixation by red clover can range from . Non-leguminous The ability to fix nitrogen in nodules is present in actinorhizal plants such as alder and bayberry, with the help of Frankia bacteria. They are found in 25 genera in the orders Cucurbitales, Fagales and Rosales, which together with the Fabales form a nitrogen-fixing clade of eurosids. 
The ability to fix nitrogen is not universally present in these families. For example, of 122 Rosaceae genera, only four fix nitrogen. Fabales were the first lineage to branch off this nitrogen-fixing clade; thus, the ability to fix nitrogen may be plesiomorphic and subsequently lost in most descendants of the original nitrogen-fixing plant; however, it may be that the basic genetic and physiological requirements were present in an incipient state in the most recent common ancestors of all these plants, but only evolved to full function in some of them. In addition, Trema (Parasponia), a tropical genus in the family Cannabaceae, is unusually able to interact with rhizobia and form nitrogen-fixing nodules. Other plant symbionts Some other plants live in association with a cyanobiont (cyanobacteria such as Nostoc) which fix nitrogen for them: Some lichens such as Lobaria and Peltigera Mosquito fern (Azolla species) Cycads Gunnera Blasia (liverwort) Hornworts Some symbiotic relationships involving agriculturally-important plants are: Sugarcane and unclear endophytes Foxtail millet and Azospirillum brasilense Kallar grass and Azoarcus sp. strain BH72 Rice and Herbaspirillum seropedicae Wheat and Klebsiella pneumoniae Maize landrace 'Sierra Mixe' / 'olotón' and various Bacteroidota and Pseudomonadota Industrial processes Historical A method for nitrogen fixation was first described by Henry Cavendish in 1784 using electric arcs reacting nitrogen and oxygen in air. This method was implemented in the Birkeland–Eyde process of 1903. The fixation of nitrogen by lightning is a very similar natural occurring process. The possibility that atmospheric nitrogen reacts with certain chemicals was first observed by Desfosses in 1828. He observed that mixtures of alkali metal oxides and carbon react with nitrogen at high temperatures. With the use of barium carbonate as starting material, the first commercial process became available in the 1860s, developed by Margueritte and Sourdeval. The resulting barium cyanide reacts with steam, yielding ammonia. In 1898 Frank and Caro developed what is known as the Frank–Caro process to fix nitrogen in the form of calcium cyanamide. The process was eclipsed by the Haber process, which was discovered in 1909. Haber process The dominant industrial method for producing ammonia is the Haber process also known as the Haber-Bosch process. Fertilizer production is now the largest source of human-produced fixed nitrogen in the terrestrial ecosystem. Ammonia is a required precursor to fertilizers, explosives, and other products. The Haber process requires high pressures (around 200 atm) and high temperatures (at least 400 °C), which are routine conditions for industrial catalysis. This process uses natural gas as a hydrogen source and air as a nitrogen source. The ammonia product has resulted in an intensification of nitrogen fertilizer globally and is credited with supporting the expansion of the human population from around 2 billion in the early 20th century to roughly 8 billion people now. Homogeneous catalysis Much research has been conducted on the discovery of catalysts for nitrogen fixation, often with the goal of lowering energy requirements. However, such research has thus far failed to approach the efficiency and ease of the Haber process. Many compounds react with atmospheric nitrogen to give dinitrogen complexes. The first dinitrogen complex to be reported was ()2+. Some soluble complexes do catalyze nitrogen fixation. 
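To make the feedstock arithmetic behind the Haber process mentioned above concrete, the sketch below combines the synthesis reaction N2 + 3 H2 → 2 NH3 with steam reforming of methane (overall CH4 + 2 H2O → CO2 + 4 H2). The reforming route and the neglect of process fuel are simplifying assumptions of the sketch, not figures from the text.

```python
# Rough stoichiometric sketch of the Haber-Bosch feedstock balance:
# N2 + 3 H2 -> 2 NH3, with hydrogen taken from methane via steam reforming
# plus water-gas shift (overall CH4 + 2 H2O -> CO2 + 4 H2).
# This counts feedstock only and ignores the fuel burned to heat and
# compress the plant, so real natural-gas consumption is substantially higher.

M_NH3, M_CH4 = 17.03, 16.04   # g/mol

def methane_per_tonne_ammonia() -> float:
    mol_nh3 = 1e6 / M_NH3          # mol NH3 in one tonne
    mol_h2 = 1.5 * mol_nh3         # 3 H2 per 2 NH3
    mol_ch4 = mol_h2 / 4           # 4 H2 per CH4 (reforming + shift)
    return mol_ch4 * M_CH4 / 1e6   # tonnes of CH4

print(f"~{methane_per_tonne_ammonia():.2f} t CH4 per t NH3 (stoichiometric minimum)")
```

The result, roughly a third of a tonne of methane per tonne of ammonia as a stoichiometric floor, is well below real plant consumption once fuel for heating and compression is included.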
Lightning
Nitrogen can be fixed by lightning, which converts nitrogen gas (N2) and oxygen gas (O2) in the atmosphere into NOx (nitrogen oxides). The N2 molecule is highly stable and nonreactive due to the triple bond between the nitrogen atoms. Lightning produces enough energy and heat to break this bond, allowing nitrogen atoms to react with oxygen to form NO. These compounds cannot be used by plants, but as the molecule cools, it reacts with oxygen to form NO2, which in turn reacts with water to produce HNO2 (nitrous acid) or HNO3 (nitric acid). When these acids seep into the soil, they make NO3− (nitrate), which is of use to plants.
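The 16-ATP stoichiometry of the nitrogenase reaction given earlier can be turned into a rough energy estimate. The ATP hydrolysis free energy used below is a typical textbook standard value and an assumption of this sketch; the effective value inside a cell is higher.

```python
# Rough energetic sketch of the nitrogenase stoichiometry given earlier:
# N2 + 8 H+ + 8 e- + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi.
# The ATP hydrolysis free energy below is a typical textbook standard value;
# the effective value inside a cell is larger, so this is a lower-bound sketch.

ATP_PER_N2 = 16
NH3_PER_N2 = 2
DG_ATP_KJ_PER_MOL = 30.5   # assumed standard free energy of ATP hydrolysis

atp_per_nh3 = ATP_PER_N2 / NH3_PER_N2
kj_per_mol_nh3 = atp_per_nh3 * DG_ATP_KJ_PER_MOL
kj_per_gram_n = kj_per_mol_nh3 / 14.0   # 14 g of nitrogen per mol NH3

print(f"{atp_per_nh3:.0f} ATP per NH3 "
      f"(~{kj_per_mol_nh3:.0f} kJ/mol NH3, ~{kj_per_gram_n:.1f} kJ per g N fixed)")
```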
Technology
Soil and soil management
null
22028
https://en.wikipedia.org/wiki/Nephrology
Nephrology
Nephrology () is a specialty for both adult internal medicine and pediatric medicine that concerns the study of the kidneys, specifically normal kidney function (renal physiology) and kidney disease (renal pathophysiology), the preservation of kidney health, and the treatment of kidney disease, from diet and medication to renal replacement therapy (dialysis and kidney transplantation). The word "renal" is an adjective meaning "relating to the kidneys", and its roots are French or late Latin. Whereas according to some opinions, "renal" and "nephro" should be replaced with "kidney" in scientific writings such as "kidney medicine" (instead of nephrology) or "kidney replacement therapy", other experts have advocated preserving the use of renal and nephro as appropriate including in "nephrology" and "renal replacement therapy", respectively. Nephrology also studies systemic conditions that affect the kidneys, such as diabetes and autoimmune disease; and systemic diseases that occur as a result of kidney disease, such as renal osteodystrophy and hypertension. A physician who has undertaken additional training and become certified in nephrology is called a nephrologist. The term "nephrology" was first used in about 1960, according to the French "néphrologie" proposed by Pr. Jean Hamburger in 1953, from the Greek / nephrós (kidney). Before then, the specialty was usually referred to as "kidney medicine". Scope Nephrology concerns the diagnosis and treatment of kidney diseases, including electrolyte disturbances and hypertension, and the care of those requiring renal replacement therapy, including dialysis and renal transplant patients. The word 'dialysis' is from the mid-19th century: via Latin from the Greek word 'dialusis'; from 'dialuein' (split, separate), from 'dia' (apart) and 'luein' (set free). In other words, dialysis replaces the primary (excretory) function of the kidney, which separates (and removes) excess toxins and water from the blood, placing them in the urine. Many diseases affecting the kidney are systemic disorders not limited to the organ itself, and may require special treatment. Examples include acquired conditions such as systemic vasculitides (e.g. ANCA vasculitis) and autoimmune diseases (e.g. lupus), as well as congenital or genetic conditions such as polycystic kidney disease. Patients are referred to nephrology specialists after a urinalysis, for various reasons, such as acute kidney injury, chronic kidney disease, hematuria, proteinuria, kidney stones, hypertension, and disorders of acid/base or electrolytes. Nephrologist A nephrologist is a physician who specializes in the care and treatment of kidney disease. Nephrology requires additional training to become an expert with advanced skills. Nephrologists may provide care to people without kidney problems and may work in general/internal medicine, transplant medicine, immunosuppression management, intensive care medicine, clinical pharmacology, perioperative medicine, or pediatric nephrology. Nephrologists may further sub-specialise in dialysis, kidney transplantation, home therapies (home dialysis), cancer-related kidney diseases (onco-nephrology), structural kidney diseases (uro-nephrology), procedural nephrology or other non-nephrology areas as described above. 
Procedures a nephrologist may perform include native kidney and transplant kidney biopsy, dialysis access insertion (temporary vascular access lines, tunnelled vascular access lines, peritoneal dialysis access lines), fistula management (angiographic or surgical fistulogram and plasty), and bone biopsy . Bone biopsies are now unusual. Training India To become a nephrologist in India, one has to complete an MBBS (5 and 1/2 years) degree, followed by an MD/DNB (3 years) either in medicine or paediatrics, followed by a DM/DNB (3 years) course in either nephrology or paediatric nephrology. Australia and New Zealand Nephrology training in Australia and New Zealand typically includes completion of a medical degree (Bachelor of Medicine, Bachelor of Surgery: 4–6 years), internship (1 year), Basic Physician Training (3 years minimum), successful completion of the Royal Australasian College of Physicians written and clinical examinations, and Advanced Physician Training in Nephrology (3 years). The training pathway is overseen and accredited by the Royal Australasian College of Physicians, though the application process varies across states. Completion of a post-graduate degree (usually a PhD) in a nephrology research interest (3–4 years) is optional but increasingly common. Finally, many Australian and New Zealand nephrologists participate in career-long professional and personal development through bodies such as the Australian and New Zealand Society of Nephrology and the Transplant Society of Australia and New Zealand. United Kingdom In the United Kingdom, nephrology (often called renal medicine) is a subspecialty of general medicine. A nephrologist has completed medical school, foundation year posts (FY1 and FY2) and core medical training (CMT), specialist training (ST) and passed the Membership of the Royal College of Physicians (MRCP) exam before competing for a National Training Number (NTN) in renal medicine. The typical Specialty Training (when they are called a registrar, or an ST) is five years and leads to a Certificate of Completion of Training (CCT) in both renal medicine and general (internal) medicine. In those five years, they usually rotate yearly between hospitals in a region (known as a deanery). They are then accepted on to the Specialist Register of the General Medical Council (GMC). Specialty trainees often interrupt their clinical training to obtain research degrees (MD/PhD). After achieving CCT, the registrar (ST) may apply for a permanent post as Consultant in Renal Medicine. Subsequently, some Consultants practice nephrology alone. Others work in this area, and in Intensive Care (ICU), or General (Internal) or Acute Medicine. United States Nephrology training can be accomplished through one of two routes. The first path way is through an internal medicine pathway leading to an Internal Medicine/Nephrology specialty, and sometimes known as "adult nephrology". The second pathway is through Pediatrics leading to a speciality in Pediatric Nephrology. In the United States, after medical school adult nephrologists complete a three-year residency in internal medicine followed by a two-year (or longer) fellowship in nephrology. Complementary to an adult nephrologist, a pediatric nephrologist will complete a three-year pediatric residency after medical school or a four-year Combined Internal Medicine and Pediatrics residency. This is followed by a three-year fellowship in Pediatric Nephrology. 
Once training is satisfactorily completed, the physician is eligible to take the American Board of Internal Medicine (ABIM) or American Osteopathic Board of Internal Medicine (AOBIM) nephrology examination. Nephrologists must be approved by one of these boards. To be approved, the physician must fulfill the requirements for education and training in nephrology in order to qualify to take the board's examination. If a physician passes the examination, then he or she can become a nephrology specialist. Typically, nephrologists also need two to three years of training in an ACGME or AOA accredited fellowship in nephrology. Nearly all programs train nephrologists in continuous renal replacement therapy; fewer than half in the United States train in the provision of plasmapheresis. Only pediatric trained physicians are able to train in pediatric nephrology, and internal medicine (adult) trained physicians may enter general (adult) nephrology fellowships. Diagnosis History and physical examination are central to the diagnostic workup in nephrology. The history typically includes the present illness, family history, general medical history, diet, medication use, drug use and occupation. The physical examination typically includes an assessment of volume state, blood pressure, heart, lungs, peripheral arteries, joints, abdomen and flank. A rash may be relevant too, especially as an indicator of autoimmune disease. Examination of the urine (urinalysis) allows a direct assessment for possible kidney problems, which may be suggested by appearance of blood in the urine (hematuria), protein in the urine (proteinuria), pus cells in the urine (pyuria) or cancer cells in the urine. A 24-hour urine collection used to be used to quantify daily protein loss (see proteinuria), urine output, creatinine clearance or electrolyte handling by the renal tubules. It is now more common to measure protein loss from a small random sample of urine. Basic blood tests can be used to check the concentration of hemoglobin, white count, platelets, sodium, potassium, chloride, bicarbonate, urea, creatinine, albumin, calcium, magnesium, phosphate, alkaline phosphatase and parathyroid hormone (PTH) in the blood. All of these may be affected by kidney problems. The serum creatinine concentration is the most important blood test as it is used to estimate the function of the kidney, called the creatinine clearance or estimated glomerular filtration rate (GFR). It is a good idea for patients with longterm kidney disease to know an up-to-date list of medications, and their latest blood tests, especially the blood creatinine level. In the United Kingdom, blood tests can monitored online by the patient, through a website called RenalPatientView. More specialized tests can be ordered to discover or link certain systemic diseases to kidney failure such as infections (hepatitis B, hepatitis C), autoimmune conditions (systemic lupus erythematosus, ANCA vasculitis), paraproteinemias (amyloidosis, multiple myeloma) and metabolic diseases (diabetes, cystinosis). Structural abnormalities of the kidneys are identified with imaging tests. These may include Medical ultrasonography/ultrasound, computed axial tomography (CT), scintigraphy (nuclear medicine), angiography or magnetic resonance imaging (MRI). In certain circumstances, less invasive testing may not provide a certain diagnosis. Where definitive diagnosis is required, a biopsy of the kidney (renal biopsy) may be performed. 
This typically involves the insertion, under local anaesthetic and ultrasound or CT guidance, of a core biopsy needle into the kidney to obtain a small sample of kidney tissue. The kidney tissue is then examined under a microscope, allowing direct visualization of the changes occurring within the kidney. Additionally, the pathology may also stage a problem affecting the kidney, allowing some degree of prognostication. In some circumstances, kidney biopsy will also be used to monitor response to treatment and identify early relapse. A transplant kidney biopsy may also be performed to look for rejection of the kidney. Treatment Treatments in nephrology can include medications, blood products, surgical interventions (urology, vascular or surgical procedures), renal replacement therapy (dialysis or kidney transplantation) and plasma exchange. Kidney problems can have significant impact on quality and length of life, and so psychological support, health education and advanced care planning play key roles in nephrology. Chronic kidney disease is typically managed with treatment of causative conditions (such as diabetes), avoidance of substances toxic to the kidneys (nephrotoxins like radiologic contrast and non-steroidal anti-inflammatory drugs), antihypertensives, diet and weight modification and planning for end-stage kidney failure. Impaired kidney function has systemic effects on the body. An erythropoetin stimulating agent (ESA) may be required to ensure adequate production of red blood cells, activated vitamin D supplements and phosphate binders may be required to counteract the effects of kidney failure on bone metabolism, and blood volume and electrolyte disturbance may need correction. Diuretics (such as furosemide) may be used to correct fluid overload, and alkalis (such as sodium bicarbonate) can be used to treat metabolic acidosis. Auto-immune and inflammatory kidney disease, such as vasculitis or transplant rejection, may be treated with immunosuppression. Commonly used agents are prednisone, mycophenolate, cyclophosphamide, ciclosporin, tacrolimus, everolimus, thymoglobulin and sirolimus. Newer, so-called "biologic drugs" or monoclonal antibodies, are also used in these conditions and include rituximab, basiliximab and eculizumab. Blood products including intravenous immunoglobulin and a process known as plasma exchange can also be employed. When the kidneys are no longer able to sustain the demands of the body, end-stage kidney failure is said to have occurred. Without renal replacement therapy, death from kidney failure will eventually result. Dialysis is an artificial method of replacing some kidney function to prolong life. Renal transplantation replaces kidney function by inserting into the body a healthier kidney from an organ donor and inducing immunologic tolerance of that organ with immunosuppression. At present, renal transplantation is the most effective treatment for end-stage kidney failure although its worldwide availability is limited by lack of availability of donor organs. Generally speaking, kidneys from living donors are 'better' than those from deceased donors, as they last longer. Most kidney conditions are chronic conditions and so long term followup with a nephrologist is usually necessary. In the United Kingdom, care may be shared with the patient's primary care physician, called a General Practitioner (GP). Organizations The world's first society of nephrology was the French 'Societe de Pathologie Renale'. 
Its first president was Jean Hamburger, and its first meeting was in Paris in February 1949. In 1959, Hamburger also founded the 'Société de Néphrologie', as a continuation of the older society. It is now called Francophone Society of Nephrology, Dialysis and Transplantation (SFNDT). The second society of nephrologists, the UK Kidney Association (UKKA) was founded in 1950, originally named the Renal Association. Its first president was Arthur Osman and met for the first time, in London, on 30 March 1950. The Società di Nefrologia Italiana was founded in 1957 and was the first national society to incorporate the phrase nephrologia (or nephrology) into its name. The word 'nephrology' appeared for the first time in a conference, on 1–4 September 1960 at the "Premier Congrès International de Néphrologie" in Evian and Geneva, the first meeting of the International Society of Nephrology (ISN, International Society of Nephrology). The first day (1.9.60) was in Geneva and the next three (2–4.9.60) were in Evian, France. The early history of the ISN is described by Robinson and Richet in 2005 and the later history by Barsoum in 2011. The ISN is the largest global society representing medical professionals engaged in advancing kidney care worldwide. It has an international office in Brussels, Belgium. In the US, founded in 1964, the National Kidney Foundation is a national organization representing patients and professionals who treat kidney diseases. Founded in 1966, the American Society of Nephrology (ASN) is the world's largest professional society devoted to the study of kidney disease. The American Nephrology Nurses' Association (ANNA), founded in 1969, promotes excellence in and appreciation of nephrology nursing to make a positive difference for patients with kidney disease. The American Association of Kidney Patients (AAKP) is a non-profit, patient-centric group focused on improving the health and well-being of CKD and dialysis patients. The National Renal Administrators Association (NRAA), founded in 1977, is a national organization that represents and supports the independent and community-based dialysis providers. The American Kidney Fund directly provides financial support to patients in need, as well as participating in health education and prevention efforts. ASDIN (American Society of Diagnostic and Interventional Nephrology) is the main organization of interventional nephrologists. Other organizations include CIDA, VASA etc. which deal with dialysis vascular access. The Renal Support Network (RSN) is a nonprofit, patient-focused, patient-run organization that provides non-medical services to those affected by chronic kidney disease (CKD). In the United Kingdom, UK National Kidney Federation and Kidney Care UK (previously known as British Kidney Patient Association, BKPA) represent patients, and the UK Kidney Association used to represent renal physicians and worked closely with a previous NHS policy directive called a National Service Framework for kidney disease.
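The diagnosis section above notes that serum creatinine is used to estimate kidney function (creatinine clearance or eGFR) but does not name a particular estimating equation. As an illustration only, the sketch below uses the classic Cockcroft–Gault formula with made-up example values; it is not drawn from this article and is not clinical guidance.

```python
# Illustrative sketch of one classic bedside estimate of creatinine clearance
# (Cockcroft-Gault). The article above does not specify which estimating
# equation is used in practice (CKD-EPI is more common for reported eGFR);
# the formula and example values here are purely illustrative.

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical example: a 60-year-old, 70 kg woman with serum creatinine 1.2 mg/dL
print(f"Estimated CrCl ~ {cockcroft_gault(60, 70, 1.2, female=True):.0f} mL/min")
```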
Biology and health sciences
Fields of medicine
Health
22054
https://en.wikipedia.org/wiki/Nuclear%20fission
Nuclear fission
Nuclear fission is a reaction in which the nucleus of an atom splits into two or more smaller nuclei. The fission process often produces gamma photons, and releases a very large amount of energy even by the energetic standards of radioactive decay. Nuclear fission was discovered by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch. Hahn and Strassmann proved that a fission reaction had taken place on 19 December 1938, and Meitner and her nephew Frisch explained it theoretically in January 1939. Frisch named the process "fission" by analogy with biological fission of living cells. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. For heavy nuclides, it is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place). Like nuclear fusion, for fission to produce energy, the total binding energy of the resulting elements must be greater than that of the starting element. Fission is a form of nuclear transmutation because the resulting fragments (or daughter atoms) are not the same element as the original parent atom. The two (or more) nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes. Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus. Apart from fission induced by an exogenous neutron, harnessed and exploited by humans, a natural form of spontaneous radioactive decay (not requiring an exogenous neutron, because the nucleus already has an overabundance of neutrons) is also referred to as fission, and occurs especially in very high-mass-number isotopes. Spontaneous fission was discovered in 1940 by Flyorov, Petrzhak, and Kurchatov in Moscow, in an experiment intended to confirm that, without bombardment by neutrons, the fission rate of uranium was negligible, as predicted by Niels Bohr; it was not negligible. Despite the possibility of spontaneous fission, it does not play any role for energy production of stars. In contrast to nuclear fusion, which drives the formation of stars and their development, one can consider nuclear fission as neglectable for the evolution of the universe. Accordingly, all elements (with a few exceptions, see "spontaneous fission") which are important for the formation of solar systems, planets and also for all forms of life are not fission products, but rather the results of fusion processes. The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum tunneling processes such as proton emission, alpha decay, and cluster decay, which give the same products each time. Nuclear fission produces energy for nuclear power and drives the explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. 
This makes a self-sustaining nuclear chain reaction possible, releasing energy at a controlled rate in a nuclear reactor or at a very rapid, uncontrolled rate in a nuclear weapon. The amount of free energy released in the fission of an equivalent amount of is a million times more than that released in the combustion of methane or from hydrogen fuel cells. The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. However, the seven long-lived fission products make up only a small fraction of fission products. Neutron absorption which does not lead to fission produces plutonium (from ) and minor actinides (from both and ) whose radiotoxicity is far higher than that of the long lived fission products. Concerns over nuclear waste accumulation and the destructive potential of nuclear weapons are a counterbalance to the peaceful desire to use fission as an energy source. The thorium fuel cycle produces virtually no plutonium and much less minor actinides, but - or rather its decay products - are a major gamma ray emitter. All actinides are fertile or fissile and fast breeder reactors can fission them all albeit only in certain configurations. Nuclear reprocessing aims to recover usable material from spent nuclear fuel to both enable uranium (and thorium) supplies to last longer and to reduce the amount of "waste". The industry term for a process that fissions all or nearly all actinides is a "closed fuel cycle". Physical overview Mechanism Younes and Loveland define fission as, "...a collective motion of the protons and neutrons that make up the nucleus, and as such it is distinguishable from other phenomena that break up the nucleus. Nuclear fission is an extreme example of large-amplitude collective motion that results in the division of a parent nucleus into two or more fragment nuclei. The fission process can occur spontaneously, or it can be induced by an incident particle." The energy from a fission reaction is produced by its fission products, though a large majority of it, about 85 percent, is found in fragment kinetic energy, while about 6 percent each comes from initial neutrons and gamma rays and those emitted after beta decay, plus about 3 percent from neutrinos as the product of such decay. Radioactive decay Nuclear fission can occur without neutron bombardment as a type of radioactive decay. This type of fission is called spontaneous fission, and was first observed in 1940. Nuclear reaction During induced fission, a compound system is formed after an incident particle fuses with a target. The resultant excitation energy may be sufficient to emit neutrons, or gamma-rays, and nuclear scission. Fission into two fragments is called binary fission, and is the most common nuclear reaction. Occurring least frequently is ternary fission, in which a third particle is emitted. This third particle is commonly an α particle. Since in nuclear fission, the nucleus emits more neutrons than the one it absorbs, a chain reaction is possible. Binary fission may produce any of the fission products, at 95±15 and 135±15 daltons. However, the binary process happens merely because it is the most probable. 
In anywhere from two to four fissions per 1000 in a nuclear reactor, ternary fission can produce three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z = 1), to as large a fragment as argon (Z = 18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~16 megaelectronvolts (MeV)), plus helium-6 nuclei, and tritons (the nuclei of tritium). Though less common than binary fission, it still produces significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors. Bohr and Wheeler used their liquid drop model, the packing fraction curve of Arthur Jeffrey Dempster, and Eugene Feenberg's estimates of nucleus radius and surface tension, to estimate the mass differences of parent and daughters in fission. They then equated this mass difference to energy using Einstein's mass-energy equivalence formula. The stimulation of the nucleus after neutron bombardment was analogous to the vibrations of a liquid drop, with surface tension and the Coulomb force in opposition. Plotting the sum of these two energies as a function of elongated shape, they determined the resultant energy surface had a saddle shape. The saddle provided an energy barrier called the critical energy barrier. Energy of about 6 MeV provided by the incident neutron was necessary to overcome this barrier and cause the nucleus to fission. According to John Lilley, "The energy required to overcome the barrier to fission is called the activation energy or fission barrier and is about 6 MeV for A ≈ 240. It is found that the activation energy decreases as A increases. Eventually, a point is reached where activation energy disappears altogether...it would undergo very rapid spontaneous fission." Maria Goeppert Mayer later proposed the nuclear shell model for the nucleus. The nuclides that can sustain a fission chain reaction are suitable for use as nuclear fuels. The most common nuclear fuels are 235U (the isotope of uranium with mass number 235 and of use in nuclear reactors) and 239Pu (the isotope of plutonium with mass number 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 daltons (fission products). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha-beta decay chain over periods of millennia to eons. In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events. Fissionable isotopes such as uranium-238 require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons). While some of the neutrons released from the fission of are fast enough to induce another fission in , most are not, meaning it can never achieve criticality. While there is a very small (albeit nonzero) chance of a thermal neutron inducing fission in , neutron absorption is orders of magnitude more likely. Energetics Input Fission cross sections are a measurable property related to the probability that fission will occur in a nuclear reaction. Cross sections are a function of incident neutron energy, and those for and are a million times higher than at lower neutron energy levels. 
Absorption of any neutron makes available to the nucleus binding energy of about 5.3 MeV. 238U needs a fast neutron to supply the additional 1 MeV needed to cross the critical energy barrier for fission. In the case of 235U, however, that extra energy is provided when 235U adjusts from an odd to an even mass. In the words of Younes and Loveland, "...the neutron absorption on a 235U target forms a 236U nucleus with excitation energy greater than the critical fission energy, whereas in the case of n + 238U, the resulting 239U nucleus has an excitation energy below the critical fission energy." About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than 1 MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when 238U absorbs slow and even some fraction of fast neutrons, to become 239U. The remaining energy to initiate fission can be supplied by two other mechanisms: one of these is more kinetic energy of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic energy of 1 MeV or more (so-called fast neutrons). Such high energy neutrons are able to fission 238U directly (see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion). However, this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission 238U. (For example, neutrons from thermal fission of 235U have a mean energy of 2 MeV, a median energy of 1.6 MeV, and a mode of 0.75 MeV, and the energy spectrum for fast fission is similar.) Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such as 235U with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the same element with an even number of neutrons (such as 238U with 146 neutrons). This extra binding energy is made available as a result of the mechanism of neutron pairing effects, which itself is caused by the Pauli exclusion principle, allowing an extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in moderated nuclear reactors, and the latter are used in fast-neutron reactors, and in weapons). According to Younes and Loveland, "Actinides like 235U that fission easily following the absorption of a thermal (0.25 meV) neutron are called fissile, whereas those like 238U that do not easily fission when they absorb a thermal neutron are called fissionable."

Output
After an incident particle has fused with a parent nucleus, if the excitation energy is sufficient, the nucleus breaks into fragments. This is called scission, and occurs at about 10⁻²⁰ seconds. The fragments can emit prompt neutrons at between 10⁻¹⁸ and 10⁻¹⁵ seconds. At about 10⁻¹¹ seconds, the fragments can emit gamma rays. At 10⁻³ seconds β decay, β-delayed neutrons, and gamma rays are emitted from the decay products.
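The fissile versus fissionable distinction described under "Input" above can be made concrete by comparing the excitation energy deposited by capturing a slow neutron (the neutron separation energy of the compound nucleus) with the fission barrier. The separation energies and barrier heights in the sketch below are approximate textbook values, not numbers quoted in this article.

```python
# Rough numerical illustration of the fissile/fissionable distinction above.
# Capturing a (near) zero-energy neutron deposits the neutron separation
# energy S_n of the compound nucleus; fission proceeds readily only if that
# excitation exceeds the fission barrier. The S_n and barrier values below
# are approximate textbook numbers, not figures quoted in this article.

cases = {
    # target: (compound nucleus, S_n in MeV, approx. fission barrier in MeV)
    "U-235":  ("U-236",  6.5, 5.8),
    "U-238":  ("U-239",  4.8, 6.3),
    "Pu-239": ("Pu-240", 6.5, 6.0),
}

for target, (compound, s_n, barrier) in cases.items():
    deficit = barrier - s_n
    verdict = ("fissile (thermal neutrons suffice)" if deficit <= 0
               else f"fissionable only; needs ~{deficit:.1f} MeV of neutron kinetic energy")
    print(f"n + {target} -> {compound}: excitation ~{s_n} MeV vs barrier ~{barrier} MeV -> {verdict}")
```

The roughly 1 to 2 MeV shortfall for 238U is consistent with the fast-neutron threshold described above; the observed threshold is somewhat lower than the bare deficit because the barrier can also be tunnelled through.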
Typical fission events release about two hundred million eV (200 MeV) of energy, the equivalent of roughly >2 trillion kelvin, for each fission event. The exact isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of energy released. This can be easily seen by examining the curve of binding energy (image below), and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope in the actinide mass range, roughly 0.9 MeV are released per nucleon of the starting element. The fission of 235U by a slow neutron yields nearly identical energy to the fission of 238U by a fast neutron. This energy release profile holds for thorium and the various minor actinides as well. When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium nucleus appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.79 MeV), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light, due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV). The fission reaction also releases ~7 MeV in prompt gamma ray photons. The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). Some processes involving neutrons are notable for absorbing or finally yielding energy — for example neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons are captured without producing fissions, they produce heat as well. Binding energy The binding energy of the nucleus is the difference between the rest-mass energy of the nucleus and the rest-mass energy of the neutron and proton nucleons. 
The binding energy formula includes volume, surface and Coulomb energy terms that include empirically derived coefficients for all three, plus energy ratios of a deformed nucleus relative to a spherical form for the surface and Coulomb terms. Additional terms can be included such as symmetry, pairing, the finite range of the nuclear force, and charge distribution within the nuclei to improve the estimate. Normally binding energy is referred to and plotted as average binding energy per nucleon. According to Lilley, "The binding energy of a nucleus is the energy required to separate it into its constituent neutrons and protons." It can be written as
B(A, Z) = [Z mH + (A − Z) mn − m(A, Z)] c²,
where A is the mass number, Z is the atomic number, mH is the atomic mass of a hydrogen atom, mn is the mass of a neutron, m(A, Z) is the mass of the neutral atom, and c is the speed of light. Thus, the mass of an atom is less than the mass of its constituent protons and neutrons, assuming the average binding energy of its electrons is negligible. The binding energy is expressed in energy units, using Einstein's mass-energy equivalence relationship. The binding energy also provides an estimate of the total energy released from fission.

The curve of binding energy is characterized by a broad maximum near mass number 60 at 8.6 MeV, then gradually decreases to 7.6 MeV at the highest mass numbers. Mass numbers higher than 238 are rare. At the lighter end of the scale, peaks are noted for helium-4, and the multiples such as beryllium-8, carbon-12, oxygen-16, neon-20 and magnesium-24. Binding energy due to the nuclear force approaches a constant value for large A, while the Coulomb force acts over a larger distance, so that electrical potential energy per proton grows as Z increases. Fission energy is released when a nucleus with A larger than about 120 fragments. Fusion energy is released when lighter nuclei combine.

Carl Friedrich von Weizsäcker's semi-empirical mass formula may be used to express the binding energy as the sum of five terms, which are the volume energy, a surface correction, Coulomb energy, a symmetry term, and a pairing term:
B(A, Z) = aV A − aS A^(2/3) − aC Z(Z − 1)/A^(1/3) − asym (A − 2Z)²/A + δ(A, Z),
where the nuclear binding energy is proportional to the nuclear volume, while nucleons near the surface interact with fewer nucleons, reducing the effect of the volume term. According to Lilley, "For all naturally occurring nuclei, the surface-energy term dominates and the nucleus exists in a state of equilibrium." The negative contribution of Coulomb energy arises from the repulsive electric force of the protons. The symmetry term arises from the fact that effective forces in the nucleus are stronger for unlike neutron-proton pairs, rather than like neutron–neutron or proton–proton pairs. The pairing term δ(A, Z) arises from the fact that like nucleons form spin-zero pairs in the same spatial state. The pairing is positive if N and Z are both even, adding to the binding energy. In fission there is a preference for fission fragments with even Z, which is called the odd–even effect on the fragments' charge distribution. This can be seen in the empirical fragment yield data for each fission product, as products with even Z have higher yield values. However, no odd–even effect is observed on fragment distribution based on their neutron number N. This result is attributed to nucleon pair breaking.
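The semi-empirical mass formula just described can be evaluated numerically. The coefficient set below is one commonly quoted parameterization and is an assumption of this sketch (textbooks differ slightly); the example estimates the prompt energy release for a classic 236U fragment split.

```python
# Numerical sketch of the semi-empirical mass formula described above.
# The coefficients are one commonly quoted set (values differ slightly
# between textbooks) and are an assumption of this sketch, not figures
# taken from the article.

A_V, A_S, A_C, A_SYM, A_P = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV

def binding_energy(a: int, z: int) -> float:
    """Total binding energy B(A, Z) in MeV from the five SEMF terms."""
    pairing = 0.0
    if a % 2 == 0:  # even-even nuclei gain pairing energy, odd-odd lose it
        pairing = A_P / a**0.5 if z % 2 == 0 else -A_P / a**0.5
    return (A_V * a                          # volume
            - A_S * a**(2/3)                 # surface correction
            - A_C * z * (z - 1) / a**(1/3)   # Coulomb repulsion
            - A_SYM * (a - 2*z)**2 / a       # symmetry
            + pairing)                       # pairing

# Energy released when the compound nucleus 236U splits into the classic
# textbook fragment pair 141Ba + 92Kr plus three free neutrons.
q = binding_energy(141, 56) + binding_energy(92, 36) - binding_energy(236, 92)
print(f"Estimated prompt Q-value: ~{q:.0f} MeV")
# The neutron-rich fragments later beta-decay, releasing further energy,
# which is why the total per fission quoted elsewhere is roughly 200 MeV.
```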
In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission to equal mass nuclei of about mass 120; the most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 daltons and the other the remaining 130 to 140 daltons. Stable nuclei, and unstable nuclei with very long half-lives, follow a trend of stability evident when is plotted against . For lighter nuclei less than = 20, the line has the slope = , while the heavier nuclei require additional neutrons to remain stable. Nuclei that are neutron- or proton-rich have excessive binding energy for stability, and the excess energy may convert a neutron to a proton or a proton to a neutron via the weak nuclear force, a process known as beta decay. Neutron-induced fission of U-235 emits a total energy of 207 MeV, of which about 200 MeV is recoverable, Prompt fission fragments amount to 168 MeV, which are easily stopped with a fraction of a millimeter. Prompt neutrons total 5 MeV, and this energy is recovered as heat via scattering in the reactor. However, many fission fragments are neutron-rich and decay via β− emissions. According to Lilley, "The radioactive decay energy from the fission chains is the second release of energy due to fission. It is much less than the prompt energy, but it is a significant amount and is why reactors must continue to be cooled after they have been shut down and why the waste products must be handled with great care and stored safely." Chain reactions John Lilley states, "...neutron-induced fission generates extra neutrons which can induce further fissions in the next generation and so on in a chain reaction. The chain reaction is characterized by the neutron multiplication factor k, which is defined as the ratio of the number of neutrons in one generation to the number in the preceding generation. If, in a reactor, k is less than unity, the reactor is subcritical, the number of neutrons decreases and the chain reaction dies out. If k > 1, the reactor is supercritical and the chain reaction diverges. This is the situation in a fission bomb where growth is at an explosive rate. If k is exactly unity, the reactions proceed at a steady rate and the reactor is said to be critical. It is possible to achieve criticality in a reactor using natural uranium as fuel, provided that the neutrons have been efficiently moderated to thermal energies." Moderators include light water, heavy water, and graphite. According to John C. Lee, "For all nuclear reactors in operation and those under development, the nuclear fuel cycle is based on one of three fissile materials, 235U, 233U, and 239Pu, and the associated isotopic chains. For the current generation of LWRs, the enriched U contains 2.5~4.5 wt% of 235U, which is fabricated into UO2 fuel rods and loaded into fuel assemblies." Lee states, "One important comparison for the three major fissile nuclides, 235U, 233U, and 239Pu, is their breeding potential. A breeder is by definition a reactor that produces more fissile material than it consumes and needs a minimum of two neutrons produced for each neutron absorbed in a fissile nucleus. 
Thus, in general, the conversion ratio (CR) is defined as the ratio of fissile material produced to that destroyed...when the CR is greater than 1.0, it is called the breeding ratio (BR)...233U offers a superior breeding potential for both thermal and fast reactors, while 239Pu offers a superior breeding potential for fast reactors." Fission reactors Critical fission reactors are the most common type of nuclear reactor. In a critical fission reactor, neutrons produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors. Such devices use radioactive decay or particle accelerators to trigger fissions. Critical fission reactors are built for three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat or the neutrons produced by the fission chain reaction: power reactors are intended to produce heat for nuclear power, either as part of a generating station or a local power system such as a nuclear submarine. research reactors are intended to produce neutrons and/or activate radioactive sources for scientific, medical, engineering, or other research purposes. breeder reactors are intended to produce nuclear fuels in bulk from more abundant isotopes. The better known fast breeder reactor makes 239Pu (a nuclear fuel) from the naturally very abundant 238U (not a nuclear fuel). Thermal breeder reactors previously tested using 232Th to breed the fissile isotope 233U (thorium fuel cycle) continue to be studied and developed. While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several early counter-examples, such as the Hanford N reactor, now decommissioned). As of 2019, the 448 nuclear power plants worldwide provided a capacity of 398 GWE, with about 85% being light-water cooled reactors such as pressurized water reactors or boiling water reactors. Energy from fission is transmitted through conduction or convection to the nuclear reactor coolant, then to a heat exchanger, and the resultant generated steam is used to drive a turbine or generator. Fission bombs The objective of an atomic bomb is to produce a device, according to Serber, "...in which energy is released by a fast neutron chain reaction in one or more of the materials known to show nuclear fission." According to Rhodes, "Untamped, a bomb core even as large as twice the critical mass would completely fission less than 1 percent of its nuclear material before it expanded enough to stop the chain reaction from proceeding. Tamper always increased efficiency: it reflected neutrons back into the core and its inertia...slowed the core's expansion and helped keep the core surface from blowing away." Rearrangement of the core material's subcritical components would need to proceed as fast as possible to ensure effective detonation. Additionally, a third basic component was necessary, "...an initiator—a Ra + Be source or, better, a Po + Be source, with the radium or polonium attached perhaps to one piece of the core and the beryllium to the other, to smash together and spray neutrons when the parts mated to start the chain reaction." 
However, any bomb would "necessitate locating, mining and processing hundreds of tons of uranium ore...", while U-235 separation or the production of Pu-239 would require additional industrial capacity. History Discovery of nuclear fission The discovery of nuclear fission occurred in 1938 in the buildings of the Kaiser Wilhelm Institute for Chemistry, today part of the Free University of Berlin, following over four decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. In 1911, Ernest Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons was surrounded by orbiting, negatively charged electrons (the Rutherford model). Niels Bohr improved upon this in 1913 by reconciling the quantum behavior of electrons (the Bohr model). In 1928, George Gamow proposed the liquid drop model, which became essential to understanding the physics of fission. In 1896, Henri Becquerel had found, and Marie Curie named, radioactivity. In 1900, Rutherford and Frederick Soddy, investigating the radioactive gas emanating from thorium, "conveyed the tremendous and inevitable conclusion that the element thorium was slowly and spontaneously transmuting itself into argon gas!" In 1919, following up on an earlier anomaly Ernest Marsden noted in 1915, Rutherford attempted to "break up the atom." Rutherford was able to accomplish the first artificial transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen 14N + α → 17O + p. Rutherford stated, "...we must conclude that the nitrogen atom is disintegrated," while the newspapers stated he had split the atom. This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. It also offered a new way to study the nucleus. Rutherford and James Chadwick then used alpha particles to "disintegrate" boron, fluorine, sodium, aluminum, and phosphorus before reaching a limitation associated with the energy of their alpha particle source. Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues Ernest Walton and John Cockcroft, who used artificially accelerated protons against lithium-7 to split this nucleus into two alpha particles. The feat was popularly known as "splitting the atom", and would win them the 1951 Nobel Prize in Physics for "Transmutation of atomic nuclei by artificially accelerated atomic particles", although it was not the nuclear fission reaction later discovered in heavy elements. English physicist James Chadwick discovered the neutron in 1932. Chadwick used an ionization chamber to observe protons knocked out of several elements by beryllium radiation, following up on earlier observations made by the Joliot-Curies. In Chadwick's words, "...In order to explain the great penetrating power of the radiation we must further assume that the particle has no net charge..." The existence of the neutron was first postulated by Rutherford in 1920, and in the words of Chadwick, "...how on earth were you going to build up a big nucleus with a large positive charge? And the answer was a neutral particle." Subsequently, he communicated his findings in more detail. In the words of Richard Rhodes, referring to the neutron, "It would therefore serve as a new nuclear probe of surpassing power of penetration." 
Philip Morrison stated, "A beam of thermal neutrons moving at about the speed of sound...produces nuclear reactions in many materials much more easily than a beam of protons...traveling thousands of times faster." According to Rhodes, "Slowing down a neutron gave it more time in the vicinity of the nucleus, and that gave it more time to be captured." Fermi's team, studying radiative capture (the emission of gamma radiation after the nucleus captures a neutron), examined sixty elements and induced radioactivity in forty. In the process, they discovered the ability of hydrogen to slow down the neutrons. Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons in 1934. Fermi concluded that his experiments had created new elements with 93 and 94 protons, which the group dubbed ausonium and hesperium. However, not all were convinced by Fermi's analysis of his results, though he would win the 1938 Nobel Prize in Physics for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". The German chemist Ida Noddack notably suggested in 1934 that instead of creating a new, heavier element 93, "it is conceivable that the nucleus breaks up into several large fragments." However, the quoted objection comes some distance down in her paper, and was but one of several gaps she noted in Fermi's claim. Although Noddack was a renowned analytical chemist, she lacked the background in physics to appreciate the enormity of what she was proposing. After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar experiments in Berlin. Meitner, an Austrian Jew, lost her Austrian citizenship with the Anschluss, the union of Austria with Germany in March 1938, but she fled in July 1938 to Sweden and started a correspondence by mail with Hahn in Berlin. By coincidence, her nephew Otto Robert Frisch, also a refugee, was also in Sweden when Meitner received a letter from Hahn dated 19 December describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. Hahn suggested a bursting of the nucleus, but he was unsure of what the physical basis for the results was. Barium had an atomic mass 40% less than uranium, and no previously known methods of radioactive decay could account for such a large difference in the mass of the nucleus. Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist. Marie Curie had been separating barium from radium for many years, and the techniques were well known. Meitner and Frisch then correctly interpreted Hahn's results to mean that the nucleus of uranium had split roughly in half. Frisch suggested the process be named "nuclear fission", by analogy to the process of living cell division into two cells, which was then called binary fission. Just as the term nuclear "chain reaction" would later be borrowed from chemistry, so the term "fission" was borrowed from biology. News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great scientific—and potentially practical—possibilities. Meitner's and Frisch's interpretation of the discovery of Hahn and Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University. I. I. Rabi and Willis Lamb, two Columbia University physicists working at Princeton, heard the news and carried it back to Columbia. 
Rabi said he told Enrico Fermi; Fermi gave credit to Lamb. Bohr soon thereafter went from Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and found Herbert L. Anderson. Bohr grabbed him by the shoulder and said: "Young man, let me explain to you about something new and exciting in physics." It was clear to a number of scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University team conducted the first nuclear fission experiment in the United States, which was done in the basement of Pupin Hall. The experiment involved placing uranium oxide inside an ionization chamber, irradiating it with neutrons, and measuring the energy thus released. The results confirmed that fission was occurring and hinted strongly that it was the isotope uranium-235 in particular that was fissioning. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of the George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations. The 6 January 1939 Hahn and Strassmann paper had announced the discovery of fission. In their second publication on nuclear fission in February 1939, Hahn and Strassmann used the term Uranspaltung (uranium fission) for the first time, and predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. The 11 February 1939 paper by Meitner and Frisch compared the process to the division of a liquid drop and estimated the energy released at 200 MeV. The 1 September 1939 paper by Bohr and Wheeler used this liquid drop model to quantify fission details, including the energy released, estimated the cross section for neutron-induced fission, and deduced that 235U was the major contributor to that cross section and to slow-neutron fission. Fission chain reaction realized During this period the Hungarian physicist Leó Szilárd realized that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction. Such a reaction using neutrons was an idea he had first formulated in 1933, upon reading Rutherford's disparaging remarks about generating power from nuclear reactions. However, Szilárd had not been able to achieve a neutron-driven chain reaction using beryllium. Szilard stated, "...if we could find an element which is split by neutrons and which would emit two neutrons when it absorbs one neutron, such an element, if assembled in sufficiently large mass, could sustain a nuclear chain reaction." On 25 January 1939, after learning of Hahn's discovery from Eugene Wigner, Szilard noted, "...if enough neutrons are emitted...then it should be, of course, possible to sustain a chain reaction. All of the things which H. G. Wells predicted appeared suddenly real to me." After the Hahn-Strassmann paper was published, Szilard noted in a letter to Lewis Strauss that during the fission of uranium, "the energy released in this new reaction must be very much higher than all previously known cases...," which might lead to "large-scale production of energy and radioactive elements, unfortunately also perhaps to atomic bombs." 
Szilard now urged Fermi (in New York) and Frédéric Joliot-Curie (in Paris) to refrain from publishing on the possibility of a chain reaction, lest the Nazi government become aware of the possibilities on the eve of what would later be known as World War II. With some hesitation Fermi agreed to self-censor. But Joliot-Curie did not, and in April 1939 his team in Paris, including Hans von Halban and Lew Kowarski, reported in the journal Nature that the number of neutrons emitted in the nuclear fission of uranium was 3.5 per fission. Szilard and Walter Zinn found "...the number of neutrons emitted by fission to be about two." Fermi and Anderson estimated "a yield of about two neutrons per each neutron captured." With the news of fission neutrons from uranium fission, Szilárd immediately understood the possibility of a nuclear chain reaction using uranium. In the summer, Fermi and Szilard proposed the idea of a nuclear reactor (pile) to mediate this process. The pile would use natural uranium as fuel. Fermi had shown much earlier that neutrons were far more effectively captured by atoms if they were of low energy (so-called "slow" or "thermal" neutrons), because for quantum reasons it made the atoms look like much larger targets to the neutrons. Thus to slow down the secondary neutrons released by the fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator", against which the fast, high-energy secondary neutrons would collide, effectively slowing them down. With enough uranium, and with sufficiently pure graphite, their "pile" could theoretically sustain a slow-neutron chain reaction. This would result in the production of heat, as well as the creation of radioactive fission products. In August 1939, Szilard, Teller and Wigner thought that the Germans might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United States government to the issue. Towards this, they persuaded Albert Einstein to lend his name to a letter directed to President Franklin Roosevelt. On 11 October, the Einstein–Szilárd letter was delivered via Alexander Sachs. Roosevelt quickly understood the implications, stating, "Alex, what you are after is to see that the Nazis don't blow us up." Roosevelt ordered the formation of the Advisory Committee on Uranium. In February 1940, encouraged by Fermi and John R. Dunning, Alfred O. C. Nier was able to separate U-235 and U-238 from uranium tetrachloride in a glass mass spectrometer. Subsequently, Dunning, bombarding the U-235 sample with neutrons generated by the Columbia University cyclotron, confirmed "U-235 was responsible for the slow neutron fission of uranium." At the University of Birmingham, Frisch teamed up with Peierls, who had been working on a critical mass formula. Assuming isotope separation was possible, they considered 235U, which had a cross section not yet determined, but which was assumed to be much larger than that of natural uranium. They calculated that only a pound or two, in a volume smaller than a golf ball, would produce a chain reaction faster than vaporization, and that the resultant explosion would generate temperatures greater than the interior of the sun and pressures greater than those at the center of the earth. Additionally, the costs of isotope separation "would be insignificant compared to the cost of the war." 
By March 1940, encouraged by Mark Oliphant, they wrote the Frisch–Peierls memorandum in two parts, "On the construction of a 'super-bomb' based on a nuclear chain reaction in uranium" and "Memorandum on the properties of a radioactive 'super-bomb'". On 10 April 1940, the first meeting of the MAUD Committee was held. In December 1940, Franz Simon at Oxford wrote his "Estimate of the Size of an Actual Separation Plant". Simon proposed gaseous diffusion as the best method for uranium isotope separation. On 28 March 1941, Emilio Segrè and Glenn Seaborg reported on the "strong indications that 239Pu undergoes fission with slow neutrons." This meant chemical separation was an alternative to uranium isotope separation: a nuclear reactor fueled with ordinary uranium could produce a plutonium isotope as a nuclear explosive substitute for 235U. In May, they demonstrated that the cross section of plutonium was 1.7 times that of 235U. When plutonium's cross section for fast fission was measured to be ten times that of 238U, plutonium became a viable option for a bomb. In October 1941, MAUD released its final report to the U.S. Government. The report stated, "We have now reached the conclusion that it will be possible to make an effective uranium bomb...The material for the first bomb could be ready by the end of 1943..." In November 1941, John Dunning and Eugene T. Booth were able to demonstrate the enrichment of uranium through gaseous barrier diffusion. On 27 November, Bush delivered the third National Academy of Sciences report to Roosevelt. The report, amongst other things, called for parallel development of all isotope-separation systems. On 6 December, Bush and Conant reorganized the Uranium Committee's tasks, with Harold Urey developing gaseous diffusion, Lawrence developing electromagnetic separation, Eger V. Murphree developing centrifuges, and Arthur Compton responsible for theoretical studies and design. On 23 April 1942, Met Lab scientists discussed seven possible ways to extract plutonium from irradiated uranium, and decided to pursue investigation of all seven. On 17 June, the first batch of uranium nitrate hexahydrate (UNH) was undergoing neutron bombardment in the Washington University in St. Louis cyclotron. On 27 July, the irradiated UNH was ready for Glenn T. Seaborg's team. On 20 August, using ultramicrochemistry techniques, they successfully extracted plutonium. In April 1939, creating a chain reaction in natural uranium, as opposed to isotope separation, had become the goal of Fermi and Szilard. Their first efforts involved five hundred pounds of uranium oxide from the Eldorado Radium Corporation. Packed into fifty-two cans, each two inches in diameter and two feet long, in a tank of manganese solution, the oxide allowed them to confirm that more neutrons were emitted than absorbed. However, the hydrogen within the water absorbed the slow neutrons necessary for fission. Carbon, in the form of graphite, was then considered because of its smaller capture cross section. In April 1940, Fermi was able to confirm carbon's potential for a slow-neutron chain reaction, after receiving National Carbon Company's graphite bricks at the Pupin Laboratories. In August and September, the Columbia team enlarged upon the cross section measurements by making a series of exponential "piles". The first piles consisted of a uranium-graphite lattice of 288 cans, each containing 60 pounds of uranium oxide, surrounded by graphite bricks. 
Fermi's goal was to determine the critical mass necessary to sustain neutron generation. Fermi defined the reproduction factor k for assessing the chain reaction, with a value of 1.0 denoting a sustained chain reaction. In September 1941, Fermi's team was only able to achieve a k value of 0.87. In April 1942, before the project was centralized in Chicago, they had achieved 0.918 by removing moisture from the oxide. In May 1942, Fermi planned a full-scale chain-reacting pile, Chicago Pile-1, after one of the exponential piles at Stagg Field reached a k of 0.995. Between 15 September and 15 November, Herbert L. Anderson and Walter Zinn built sixteen exponential piles. Acquisition of purer forms of graphite, free of boron with its large capture cross section, became paramount. Also important was the acquisition of highly purified forms of oxide from Mallinckrodt Chemical Works. Finally, acquiring pure uranium metal from the Ames process meant the replacement of oxide pseudospheres with Frank Spedding's "eggs". Starting on 16 November 1942, Fermi had Anderson and Zinn working in two twelve-hour shifts, constructing a pile that eventually reached 57 layers by 1 December. The final pile consisted of 771,000 pounds of graphite, 80,590 pounds of uranium oxide, and 12,400 pounds of uranium metal, with ten cadmium control rods. Neutron intensity was measured with a boron trifluoride counter, with the control rods removed, after the end of each shift. On 2 December 1942, with k approaching 1.0, Fermi had all but one of the control rods removed, and gradually withdrew the last one. The neutron counter clicks increased, as did the pen recorder, until Fermi announced "The pile has gone critical." They had achieved a k of 1.0006, which meant that neutron intensity doubled every two minutes (see the short illustrative calculation below), in addition to breeding plutonium. Manhattan Project and beyond In the United States, an all-out effort for making atomic weapons was begun in late 1942. This work was taken over by the U.S. Army Corps of Engineers in 1943, and known as the Manhattan Engineer District. The top-secret Manhattan Project, as it was colloquially known, was led by General Leslie R. Groves. Among the project's dozens of sites were: Hanford Site in Washington, which had the first industrial-scale nuclear reactors and produced plutonium; Oak Ridge, Tennessee, which was primarily concerned with uranium enrichment; and Los Alamos, in New Mexico, which was the scientific hub for research on bomb development and design. Other sites, notably the Berkeley Radiation Laboratory and the Metallurgical Laboratory at the University of Chicago, played important contributing roles. Overall scientific direction of the project was managed by the physicist J. Robert Oppenheimer. In July 1945, the first atomic explosive device, dubbed "The Gadget", was detonated in the New Mexico desert in the Trinity test. It was fueled by plutonium created at Hanford. In August 1945, two more atomic devices – "Little Boy", a uranium-235 bomb, and "Fat Man", a plutonium bomb – were used against the Japanese cities of Hiroshima and Nagasaki. Natural fission chain-reactors on Earth Criticality in nature is uncommon. At three ore deposits at Oklo in Gabon, sixteen sites (the so-called Oklo Fossil Reactors) have been discovered at which self-sustaining nuclear fission took place approximately 2 billion years ago. French physicist Francis Perrin discovered the Oklo Fossil Reactors in 1972; their existence had been postulated by Paul Kuroda in 1956. 
Large-scale natural uranium fission chain reactions, moderated by normal water, had occurred far in the past and would not be possible now. This ancient process was able to use normal water as a moderator only because 2 billion years before the present, natural uranium was richer in the shorter-lived fissile isotope 235U (about 3%), than natural uranium available today (which is only 0.7%, and must be enriched to 3% to be usable in light-water reactors).
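The figures quoted above lend themselves to a short back-of-the-envelope check. The sketch below is illustrative only: the 200 MeV of recoverable energy per fission and the Chicago Pile-1 multiplication factor of k = 1.0006 come from the text, while the 0.1-second neutron generation time is an assumed, hypothetical value for a graphite-moderated pile, not a figure from the source.

```python
# Illustrative calculations of fission energy and chain-reaction growth.
# The 200 MeV/fission and k = 1.0006 figures come from the text above;
# the 0.1 s neutron generation time is an assumed placeholder value.

import math

MEV_TO_JOULE = 1.602176634e-13   # 1 MeV expressed in joules
AVOGADRO = 6.02214076e23
U235_MOLAR_MASS_G = 235.04

def energy_per_kg_u235(mev_per_fission=200.0):
    """Recoverable energy (joules) if every nucleus in 1 kg of U-235 fissions."""
    atoms_per_kg = AVOGADRO * 1000.0 / U235_MOLAR_MASS_G
    return atoms_per_kg * mev_per_fission * MEV_TO_JOULE

def doubling_time(k, generation_time_s):
    """Time for the neutron population to double when k > 1.

    Each generation multiplies the population by k, so after g generations
    the population is k**g; doubling occurs when g = ln(2) / ln(k).
    """
    if k <= 1.0:
        raise ValueError("population only grows for k > 1")
    generations = math.log(2.0) / math.log(k)
    return generations * generation_time_s

if __name__ == "__main__":
    print(f"Energy per kg of U-235: {energy_per_kg_u235():.2e} J")
    # With an assumed 0.1 s generation time, k = 1.0006 gives a doubling time
    # on the order of two minutes, consistent with the account of CP-1 above.
    print(f"Doubling time at k = 1.0006: {doubling_time(1.0006, 0.1):.0f} s")
```

Under these assumptions, complete fission of one kilogram of U-235 corresponds to roughly 8 × 10^13 J, and a pile held at k = 1.0006 needs on the order of a thousand generations, or about two minutes, to double its neutron population.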
Nonsteroidal anti-inflammatory drug
Non-steroidal anti-inflammatory drugs (NSAID) are members of a therapeutic drug class which reduces pain, decreases inflammation, decreases fever, and prevents blood clots. Side effects depend on the specific drug, its dose and duration of use, but largely include an increased risk of gastrointestinal ulcers and bleeds, heart attack, and kidney disease. The term non-steroidal, common from around 1960, distinguishes these drugs from corticosteroids, another class of anti-inflammatory drugs, which during the 1950s had acquired a bad reputation due to overuse and side-effect problems after their introduction in 1948. NSAIDs work by inhibiting the activity of cyclooxygenase enzymes (the COX-1 and COX-2 isoenzymes). In cells, these enzymes are involved in the synthesis of key biological mediators, namely prostaglandins, which are involved in inflammation, and thromboxanes, which are involved in blood clotting. There are two general types of NSAIDs available: non-selective and COX-2 selective. Most NSAIDs are non-selective, and inhibit the activity of both COX-1 and COX-2. These NSAIDs, while reducing inflammation, also inhibit platelet aggregation and increase the risk of gastrointestinal ulcers and bleeds. COX-2 selective inhibitors have fewer gastrointestinal side effects, but promote thrombosis, and some of these agents substantially increase the risk of heart attack. As a result, certain COX-2 selective inhibitors—such as rofecoxib—are no longer used due to the high risk of undiagnosed vascular disease. These differential effects are due to the different roles and tissue localisations of each COX isoenzyme. By inhibiting physiological COX activity, NSAIDs may cause deleterious effects on kidney function, and, perhaps as a result of water and sodium retention and decreases in renal blood flow, may lead to heart problems. In addition, NSAIDs can blunt the production of erythropoietin, resulting in anaemia, since haemoglobin needs this hormone to be produced. The most prominent NSAIDs are aspirin, ibuprofen, and naproxen; all available over the counter (OTC) in most countries. Paracetamol (acetaminophen) is generally not considered an NSAID because it has only minor anti-inflammatory activity. Paracetamol treats pain mainly by blocking COX-2 and inhibiting endocannabinoid reuptake almost exclusively within the brain, and only minimally in the rest of the body. Medical uses NSAIDs are often suggested for the treatment of acute or chronic conditions where pain and inflammation are present. NSAIDs are generally used for the symptomatic relief of the following conditions: Osteoarthritis Rheumatoid arthritis Mild-to-moderate pain due to inflammation and tissue injury Low back pain Inflammatory arthropathies (e.g., ankylosing spondylitis, psoriatic arthritis, reactive arthritis) Tennis elbow Headache Migraine Acute gout Dysmenorrhea (menstrual pain) Metastatic bone pain Postoperative pain Muscle stiffness and pain due to Parkinson's disease Pyrexia (fever) Ileus Renal colic Macular edema Traumatic injury Chronic pain and cancer-related pain The effectiveness of NSAIDs for treating non-cancer chronic pain and cancer-related pain in children and adolescents is not clear. There have not been sufficient numbers of high-quality randomised controlled trials conducted. Inflammation Differences in anti-inflammatory activity between the various individual NSAIDs are small, but there is considerable variation among individual patients in therapeutic response and tolerance to these drugs. 
About 60% of patients will respond to any NSAID; of the others, those who do not respond to one may well respond to another. Pain relief starts soon after taking the first dose, and a full analgesic effect should normally be obtained within a week, whereas an anti-inflammatory effect may not be achieved (or may not be clinically assessable) for up to three weeks. If appropriate responses are not obtained within these times, another NSAID should be tried. Surgical pain Pain following surgery can be significant, and many people require strong pain medications such as opioids. There is some low-certainty evidence that starting NSAID painkiller medications in adults early, before surgery, may help reduce post-operative pain, and also reduce the dose or quantity of opioid medications required after surgery. Any increased risk of surgical bleeding, bleeding in the gastrointestinal system, myocardial infarction, or injury to the kidneys has not been well studied. When used in combination with paracetamol, the analgesic effect on post-operative pain may be improved. Aspirin Aspirin, the only NSAID able to irreversibly inhibit COX-1, is also indicated for antithrombosis through inhibition of platelet aggregation. This is useful for the management of arterial thrombosis and prevention of adverse cardiovascular events like heart attacks. Aspirin inhibits platelet aggregation by inhibiting the action of thromboxane A2. Dentistry NSAIDs are useful in the management of post-operative dental pain following invasive dental procedures such as dental extraction. When not contra-indicated, they are favoured over the use of paracetamol alone due to the anti-inflammatory effect they provide. There is weak evidence suggesting that taking pre-operative analgesia can reduce the length of post-operative pain associated with placing orthodontic spacers under local anaesthetic. Alzheimer's disease Based on observational studies and randomized controlled trials, NSAID use is not effective for the treatment or prevention of Alzheimer's disease. Contraindications NSAIDs may be used with caution by people with the following conditions: Persons who are over age 50, and who have a family history of gastrointestinal (GI) problems Persons who have had previous gastrointestinal problems from NSAID use NSAIDs should usually be avoided by people with the following conditions: Peptic ulcer or stomach bleeding Uncontrolled hypertension Kidney disease People with inflammatory bowel disease Past transient ischemic attack (excluding aspirin) Past stroke (excluding aspirin) Past myocardial infarction (excluding aspirin) Coronary artery disease (excluding aspirin) Undergoing coronary artery bypass surgery Congestive heart failure (excluding low-dose aspirin) In the third trimester of pregnancy Persons who have undergone gastric bypass surgery Persons who have a history of allergic or allergic-like NSAID hypersensitivity reactions, e.g. aspirin-exacerbated respiratory disease Adverse effects The widespread use of NSAIDs has meant that the adverse effects of these drugs have become increasingly common. Use of NSAIDs increases risk of a range of gastrointestinal (GI) problems, kidney disease and adverse cardiovascular events. As commonly used for post-operative pain, there is evidence of increased risk of kidney complications. Their use following gastrointestinal surgery remains controversial, given mixed evidence of increased risk of leakage from any bowel anastomosis created. 
An estimated 10–20% of people taking NSAIDs experience indigestion. In the 1990s, high doses of prescription NSAIDs were associated with serious upper gastrointestinal adverse events, including bleeding. NSAIDs, like all medications, may interact with other medications. For example, concurrent use of NSAIDs and quinolone antibiotics may increase the risk of quinolones' adverse central nervous system effects, including seizure. There is debate over the benefits and risks of NSAIDs for treating chronic musculoskeletal pain. Each drug has a benefit-risk profile, and balancing the risk of no treatment with the competing potential risks of various therapies should be considered. For people over the age of 65, the balance between the benefits of pain-relief medications such as NSAIDs and the potential for adverse effects has not been well determined. There is some evidence suggesting that, for some people, use of NSAIDs (or other anti-inflammatories) may contribute to the initiation of chronic pain. Side effects are dose-dependent, and in many cases severe enough to pose the risk of ulcer perforation, upper gastrointestinal bleeding, and death, limiting the use of NSAID therapy. An estimated 10–20% of NSAID patients experience dyspepsia, and NSAID-associated upper gastrointestinal adverse events are estimated to result in 103,000 hospitalizations and 16,500 deaths per year in the United States, and represent 43% of drug-related emergency visits. Many of these events are avoidable; a review of physician visits and prescriptions estimated that unnecessary prescriptions for NSAIDs were written in 42% of visits. Aspirin should not be taken by people who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency. People with kidney disease, hyperuricemia, or gout should not take aspirin because it inhibits the kidneys' ability to excrete uric acid, and thus may exacerbate these conditions. Combinational risk If a COX-2 inhibitor is taken, a traditional NSAID (prescription or over-the-counter) should not be taken at the same time. Rofecoxib (Vioxx) was shown to produce significantly fewer gastrointestinal adverse drug reactions (ADRs) compared with naproxen. The study, the VIGOR trial, raised the issue of the cardiovascular safety of the coxibs (COX-2 inhibitors). A statistically significant increase in the incidence of myocardial infarctions was observed in patients on rofecoxib. Further data, from the APPROVe trial, showed a statistically significant relative risk of cardiovascular events of 1.97 versus placebo—which caused a worldwide withdrawal of rofecoxib in October 2004. Use of methotrexate together with NSAIDs in rheumatoid arthritis is safe, if adequate monitoring is done. Cardiovascular NSAIDs, aside from aspirin, increase the risk of myocardial infarction and stroke. This increased risk can appear within as little as a week of use. They are not recommended in those who have had a previous heart attack as they increase the risk of death or recurrent MI. Evidence indicates that naproxen may be the least harmful out of these. 
NSAIDs aside from (low-dose) aspirin are associated with a doubled risk of heart failure in people without a history of cardiac disease. In people with such a history, use of NSAIDs (aside from low-dose aspirin) was associated with a more than 10-fold increase in heart failure. If this link is proven causal, researchers estimate that NSAIDs would be responsible for up to 20 percent of hospital admissions for congestive heart failure. In people with heart failure, NSAIDs increase mortality risk (hazard ratio) by approximately 1.2–1.3 for naproxen and ibuprofen, 1.7 for rofecoxib and celecoxib, and 2.1 for diclofenac. On 9 July 2015, the Food and Drug Administration (FDA) toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAIDs) other than aspirin. Possible erectile dysfunction risk A 2005 Finnish survey study found an association between long term (over three months) use of NSAIDs and erectile dysfunction. A 2011 publication in The Journal of Urology received widespread publicity. According to the study, men who used NSAIDs regularly were at significantly increased risk of erectile dysfunction. A link between NSAID use and erectile dysfunction still existed after controlling for several conditions. However, the study was observational and not controlled, with low original participation rate, potential participation bias, and other uncontrolled factors. The authors warned against drawing any conclusion regarding cause. Gastrointestinal The main adverse drug reactions (ADRs) associated with NSAID use relate to direct and indirect irritation of the gastrointestinal (GI) tract. NSAIDs cause a dual assault on the GI tract: the acidic molecules directly irritate the gastric mucosa, and inhibition of COX-1 and COX-2 reduces the levels of protective prostaglandins. Inhibition of prostaglandin synthesis in the GI tract causes increased gastric acid secretion, diminished bicarbonate secretion, diminished mucus secretion and diminished trophic effects on the epithelial mucosa. Common gastrointestinal side effects include: Nausea or vomiting Indigestion Gastric ulceration or bleeding Diarrhea Clinical NSAID ulcers are related to the systemic effects of NSAID administration. Such damage occurs irrespective of the route of administration of the NSAID (e.g., oral, rectal, or parenteral) and can occur even in people who have achlorhydria. Ulceration risk increases with therapy duration, and with higher doses. To minimize GI side effects, it is prudent to use the lowest effective dose for the shortest period of time—a practice that studies show is often not followed. Over 50% of patients who take NSAIDs have sustained some mucosal damage to their small intestine. The risk and rate of gastric adverse effects is different depending on the type of NSAID medication a person is taking. Indomethacin, ketoprofen, and piroxicam use appear to lead to the highest rate of gastric adverse effects, while ibuprofen (lower doses) and diclofenac appear to have lower rates. Certain NSAIDs, such as aspirin, have been marketed in enteric-coated formulations that manufacturers claim reduce the incidence of gastrointestinal ADRs. Similarly, some believe that rectal formulations may reduce gastrointestinal ADRs. However, consistent with the systemic mechanism of such ADRs, and in clinical practice, these formulations have not demonstrated a reduced risk of GI ulceration. 
Numerous "gastro-protective" drugs have been developed with the goal of preventing gastrointestinal toxicity in people who need to take NSAIDs on a regular basis. Gastric adverse effects may be reduced by taking medications that suppress acid production such as proton pump inhibitors (e.g.: omeprazole and esomeprazole), or by treatment with a drug that mimics prostaglandin in order to restore the lining of the GI tract (e.g.: a prostaglandin analog misoprostol). Diarrhea is a common side effect of misoprostol; however, higher doses of misoprostol have been shown to reduce the risk of a person having a complication related to a gastric ulcer while taking NSAIDs. While these techniques may be effective, they are expensive for maintenance therapy. Hydrogen sulfide NSAID hybrids prevent the gastric ulceration/bleeding associated with taking the NSAIDs alone. Hydrogen sulfide is known to have a protective effect on the cardiovascular and gastrointestinal system. Inflammatory bowel disease NSAIDs should be used with caution in individuals with inflammatory bowel disease (e.g., Crohn's disease or ulcerative colitis) due to their tendency to cause gastric bleeding and form ulceration in the gastric lining. Renal NSAIDs are also associated with a fairly high incidence of adverse drug reactions (ADRs) on the kidney and over time can lead to chronic kidney disease. The mechanism of these kidney ADRs is due to changes in kidney blood flow. Prostaglandins normally dilate the afferent arterioles of the glomeruli. This helps maintain normal glomerular perfusion and glomerular filtration rate (GFR), an indicator of kidney function. This is particularly important in kidney failure where the kidney is trying to maintain renal perfusion pressure by elevated angiotensin II levels. At these elevated levels, angiotensin II also constricts the afferent arteriole into the glomerulus in addition to the efferent arteriole it normally constricts. Since NSAIDs block this prostaglandin-mediated effect of afferent arteriole dilation, particularly in kidney failure, NSAIDs cause unopposed constriction of the afferent arteriole and decreased RPF (renal perfusion flow) and GFR. Common ADRs associated with altered kidney function include: Sodium and fluid retention Hypertension (high blood pressure) These agents may also cause kidney impairment, especially in combination with other nephrotoxic agents. Kidney failure is especially a risk if the patient is also concomitantly taking an ACE inhibitor (which removes angiotensin II's vasoconstriction of the efferent arteriole) and a diuretic (which drops plasma volume, and thereby RPF)—the so-called "triple whammy" effect. In rarer instances NSAIDs may also cause more severe kidney conditions: Interstitial nephritis Nephrotic syndrome Acute kidney injury Acute tubular necrosis Renal papillary necrosis NSAIDs in combination with excessive use of phenacetin or paracetamol (acetaminophen) may lead to analgesic nephropathy. Photosensitivity Photosensitivity is a commonly overlooked adverse effect of many of the NSAIDs. The 2-arylpropionic acids are the most likely to produce photosensitivity reactions, but other NSAIDs have also been implicated including piroxicam, diclofenac, and benzydamine. Benoxaprofen, since withdrawn due to its liver toxicity, was the most photoactive NSAID observed. The mechanism of photosensitivity, responsible for the high photoactivity of the 2-arylpropionic acids, is the ready decarboxylation of the carboxylic acid moiety. 
The specific absorbance characteristics of the different chromophoric 2-aryl substituents affect the decarboxylation mechanism. During pregnancy While NSAIDs as a class are not direct teratogens, use of NSAIDs in late pregnancy can cause premature closure of the fetal ductus arteriosus and kidney ADRs in the fetus. Thus, NSAIDs are not recommended during the third trimester of pregnancy because of the increased risk of premature constriction of the ductus arteriosus. Additionally, they are linked with premature birth and miscarriage. Aspirin, however, is used together with heparin in pregnant women with antiphospholipid syndrome. Additionally, indomethacin can be used in pregnancy to treat polyhydramnios by reducing fetal urine production via inhibiting fetal renal blood flow. In contrast, paracetamol (acetaminophen) is regarded as being safe and well tolerated during pregnancy, although Leffers et al. released a study in 2010 indicating that it may be associated with male infertility in the unborn. Doses should be taken as prescribed, due to risk of liver toxicity with overdoses. In France, the country's health agency contraindicates the use of NSAIDs, including aspirin, after the sixth month of pregnancy. In October 2020, the U.S. Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies, which can lead to low amniotic fluid levels, as a result of the use of NSAIDs. The agency recommends avoiding the use of NSAIDs in pregnant women at 20 weeks or later in pregnancy. Allergy and allergy-like hypersensitivity reactions A variety of allergic or allergic-like NSAID hypersensitivity reactions follow the ingestion of NSAIDs. These hypersensitivity reactions differ from the other adverse reactions listed here, which are toxicity reactions, i.e. unwanted reactions that result from the pharmacological action of a drug, are dose-related, and can occur in any treated individual; hypersensitivity reactions are idiosyncratic reactions to a drug. Some NSAID hypersensitivity reactions are truly allergic in origin: 1) repetitive IgE-mediated urticarial skin eruptions, angioedema, and anaphylaxis occurring immediately to hours after ingesting one structural type of NSAID but not after ingesting structurally unrelated NSAIDs; 2) comparatively mild to moderately severe T cell-mediated delayed-onset (usually more than 24 hours) skin reactions such as maculopapular rash, fixed drug eruptions, photosensitivity reactions, delayed urticaria, and contact dermatitis; or 3) far more severe and potentially life-threatening T cell-mediated delayed systemic reactions such as the DRESS syndrome, acute generalized exanthematous pustulosis, the Stevens–Johnson syndrome, and toxic epidermal necrolysis. Other NSAID hypersensitivity reactions are allergy-like symptoms but do not involve true allergic mechanisms; rather, they appear due to the ability of NSAIDs to alter the metabolism of arachidonic acid in favor of forming metabolites that promote allergic symptoms. Affected individuals may be abnormally sensitive to these provocative metabolites or overproduce them, and typically are susceptible to a wide range of structurally dissimilar NSAIDs, particularly those that inhibit COX-1. 
Symptoms, which develop immediately to hours after ingesting any of various NSAIDs that inhibit COX-1, are: 1) exacerbations of asthma and rhinitis symptoms (see aspirin-exacerbated respiratory disease) in individuals with a history of asthma or rhinitis, and 2) exacerbation or first-time development of wheals or angioedema in individuals with or without a history of chronic urticarial lesions or angioedema. Possible effects on bone and soft tissue healing It has been hypothesized that NSAIDs may delay healing from bone and soft-tissue injuries by inhibiting inflammation. On the other hand, it has also been hypothesized that NSAIDs might speed recovery from soft tissue injuries by preventing inflammatory processes from damaging adjacent, non-injured muscles. There is moderate evidence that they delay bone healing. Their overall effect on soft-tissue healing is unclear. Ototoxicity Long-term use of NSAID analgesics and paracetamol is associated with an increased risk of hearing loss. Other The use of NSAIDs for analgesia following gastrointestinal surgery remains controversial, given mixed evidence of an increased risk of leakage from any bowel anastomosis created. This risk may vary according to the class of NSAID prescribed. Common adverse drug reactions (ADRs), other than those listed above, include raised liver enzymes, headache, and dizziness. Uncommon ADRs include an abnormally high level of potassium in the blood, confusion, spasm of the airways, and rash. Ibuprofen may also rarely cause irritable bowel syndrome symptoms. NSAIDs are also implicated in some cases of Stevens–Johnson syndrome. Most NSAIDs penetrate poorly into the central nervous system (CNS). However, the COX enzymes are expressed constitutively in some areas of the CNS, meaning that even limited penetration may cause adverse effects such as somnolence and dizziness. NSAIDs may increase the risk of bleeding in patients with dengue fever; for this reason, NSAIDs are only available with a prescription in India. In very rare cases, ibuprofen can cause aseptic meningitis. As with other drugs, allergies to NSAIDs might exist. While many allergies are specific to one NSAID, up to 1 in 5 people may have unpredictable cross-reactive allergic responses to other NSAIDs as well. Immune response Although small doses generally have little to no effect on the immune system, large doses of NSAIDs significantly suppress the production of immune cells. As NSAIDs affect prostaglandins, they affect the production of most fast-growing cells. This includes immune cells. Unlike corticosteroids, they do not directly suppress the immune system and so their effect on the immune system is not immediately obvious. They suppress the production of new immune cells, but leave existing immune cells functional. Large doses slowly reduce the immune response as the immune cells are renewed at a much lower rate, causing a gradual reduction of the immune response that is much slower and less noticeable than the immediate effect of corticosteroids. The effect increases significantly with dosage, at a nearly exponential rate: doubling the dose reduced cell counts by nearly a factor of four, and increasing the dose five-fold reduced cell counts to only a few percent of normal levels. This is likely why the effect was not immediately obvious in low-dose trials, as the effect is not apparent until much higher dosages are tested. Interactions NSAIDs reduce kidney blood flow and thereby decrease the efficacy of diuretics, and inhibit the elimination of lithium and methotrexate. 
NSAIDs cause decreased ability to form blood clots, which can increase the risk of bleeding when combined with other drugs that also decrease blood clotting, such as warfarin. NSAIDs may aggravate hypertension (high blood pressure) and thereby antagonize the effect of antihypertensives, such as ACE inhibitors. NSAIDs may interfere with and reduce the effectiveness of SSRI antidepressants by inhibiting TNFα and IFNγ, both of which are cytokines. NSAIDs, when used in combination with SSRIs, increase the risk of adverse gastrointestinal effects. NSAIDs, when used in combination with SSRIs, also increase the risk of internal bleeding and brain hemorrhages. Various widely used NSAIDs enhance endocannabinoid signaling by blocking the anandamide-degrading membrane enzyme fatty acid amide hydrolase (FAAH). NSAIDs may reduce the effectiveness of antibiotics: an in-vitro study on cultured bacteria found that adding NSAIDs to antibiotics reduced their effectiveness by around 20%. The concomitant use of NSAIDs with alcohol and/or tobacco products significantly increases the already elevated risk of peptic ulcers during NSAID therapy. Mechanism of action Most NSAIDs act as nonselective inhibitors of the cyclooxygenase (COX) enzymes, inhibiting both the cyclooxygenase-1 (COX-1) and cyclooxygenase-2 (COX-2) isoenzymes. This inhibition is competitively reversible (albeit at varying degrees of reversibility), as opposed to the mechanism of aspirin, which is irreversible inhibition. COX catalyzes the formation of prostaglandins and thromboxane from arachidonic acid (itself derived from the cellular phospholipid bilayer by phospholipase A2). Prostaglandins act (among other things) as messenger molecules in the process of inflammation. This mechanism of action was elucidated in 1970 by John Vane (1927–2004), who received a Nobel Prize for his work (see Mechanism of action of aspirin). COX-1 is a constitutively expressed enzyme with a "house-keeping" role in regulating many normal physiological processes. One of these is in the stomach lining, where prostaglandins serve a protective role, preventing the stomach mucosa from being eroded by its own acid. COX-2 is an enzyme facultatively expressed in inflammation, and it is inhibition of COX-2 that produces the desirable effects of NSAIDs. When nonselective COX-1/COX-2 inhibitors (such as aspirin, ibuprofen, and naproxen) lower stomach prostaglandin levels, ulcers of the stomach or duodenum and internal bleeding can result. The discovery of COX-2 led to research on, and the development of, selective COX-2-inhibiting drugs that do not cause the gastric problems characteristic of older NSAIDs. NSAIDs have been studied in various assays to understand how they affect each of these enzymes. While the assays reveal differences between compounds, different assays unfortunately provide differing selectivity ratios. Paracetamol (acetaminophen) is not considered an NSAID because it has little anti-inflammatory activity. It treats pain mainly by blocking COX-2, mostly in the central nervous system, but not much in the rest of the body. However, many aspects of the mechanism of action of NSAIDs remain unexplained, and for this reason, further COX pathways are hypothesized. The COX-3 pathway was believed to fill some of this gap, but recent findings make it appear unlikely that it plays any significant role in humans, and alternative explanation models are proposed. 
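As an illustration of what such an assay-derived ratio means, the short sketch below computes a COX-2 selectivity ratio from IC50 values (the drug concentration that halves enzyme activity). The drug names and IC50 numbers here are invented placeholders, not measured data; as noted above, real values differ from assay to assay.

```python
# A minimal sketch of how a COX-2 selectivity ratio is derived from assay
# IC50 values. The values below are invented placeholders, not real data.

def cox2_selectivity(ic50_cox1_uM, ic50_cox2_uM):
    """Return the COX-1/COX-2 IC50 ratio; values > 1 indicate COX-2 selectivity."""
    return ic50_cox1_uM / ic50_cox2_uM

hypothetical_assay = {
    # drug: (IC50 for COX-1 in uM, IC50 for COX-2 in uM) -- placeholders only
    "drug_A_nonselective": (5.0, 4.0),
    "drug_B_cox2_selective": (60.0, 0.5),
}

for name, (cox1, cox2) in hypothetical_assay.items():
    print(f"{name}: COX-1/COX-2 ratio = {cox2_selectivity(cox1, cox2):.1f}")
```

In this toy example the second compound inhibits COX-2 at a far lower concentration than COX-1, which is the pattern associated with fewer gastrointestinal effects but, as discussed above, a higher thrombotic risk.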
NSAIDs interact with the endocannabinoid system and its endocannabinoids, as COX-2 has been shown to utilize endocannabinoids as substrates, and this may have a key role in both the therapeutic effects and adverse effects of NSAIDs, as well as in NSAID-induced placebo responses. NSAIDs are also used in the acute pain caused by gout because they inhibit urate crystal phagocytosis in addition to inhibiting prostaglandin synthase. Antipyretic activity NSAIDs have antipyretic activity and can be used to treat fever. Fever is caused by elevated levels of prostaglandin E2 (PGE2), which alters the firing rate of neurons within the hypothalamus that control thermoregulation. Antipyretics work by inhibiting the enzyme COX, which causes the general inhibition of prostanoid biosynthesis (PGE2) within the hypothalamus. PGE2 signals to the hypothalamus to increase the body's thermal setpoint. Ibuprofen has been shown to be more effective as an antipyretic than paracetamol (acetaminophen). Arachidonic acid is the precursor substrate for cyclooxygenase, leading to the production of prostaglandins F, D, and E. Classification NSAIDs can be classified based on their chemical structure or mechanism of action. Older NSAIDs were known long before their mechanism of action was elucidated and were for this reason classified by chemical structure or origin. Newer substances are more often classified by mechanism of action. Salicylates: aspirin (acetylsalicylic acid), diflunisal (Dolobid), salicylic acid and its salts, salsalate (Disalcid), choline salicylate, methyl salicylate, and sodium salicylate. Propionic acid derivatives: ibuprofen, dexibuprofen, naproxen, fenoprofen, ketoprofen, dexketoprofen, flurbiprofen, oxaprozin, loxoprofen, pelubiprofen, zaltoprofen, fenbufen, tiaprofenic acid, and carprofen. Acetic acid derivatives: indomethacin, acemetacin, tolmetin, sulindac, etodolac, ketorolac, diclofenac, fenclofenac, aceclofenac, bromfenac, fentiazac, and nabumetone (the drug itself is non-acidic but the active, principal metabolite has a carboxylic acid group). Enolic acid (oxicam) derivatives: piroxicam, ampiroxicam, meloxicam, tenoxicam, droxicam, lornoxicam, isoxicam (withdrawn from market 1985), and phenylbutazone (Bute). Anthranilic acid derivatives (fenamates) are derived from fenamic acid, which is a derivative of anthranilic acid, which in turn is a nitrogen isostere of salicylic acid, which is the active metabolite of aspirin; they include mefenamic acid, meclofenamic acid, flufenamic acid, tolfenamic acid, and etofenamate. Selective COX-2 inhibitors (coxibs): celecoxib (FDA alert), rofecoxib (withdrawn from market), valdecoxib (withdrawn from market), parecoxib (FDA withdrawn, licensed in the EU), lumiracoxib (TGA cancelled registration), etoricoxib (not FDA approved, licensed in the EU), firocoxib (used in dogs and horses), deracoxib (labeled for use in dogs), and robenacoxib (labeled for use in dogs and cats). Sulfonanilides: nimesulide (systemic preparations are banned by several countries for the potential risk of hepatotoxicity). Others: clonixin; licofelone, which acts by inhibiting LOX (lipoxygenase) and COX and hence is known as a 5-LOX/COX inhibitor; and H-harpagide in figwort or devil's claw. Some NSAIDs, such as ketorolac and diclofenac sodium, are also given intravenously. Chirality Most NSAIDs are chiral molecules; diclofenac and the oxicams are exceptions. However, the majority are prepared as racemic mixtures. Typically, only a single enantiomer is pharmacologically active. 
For some drugs (typically profens), an isomerase enzyme in vivo converts the inactive enantiomer into the active form, although its activity varies widely in individuals. This phenomenon is likely responsible for the poor correlation between NSAID efficacy and plasma concentration observed in older studies when specific analysis of the active enantiomer was not performed. Ibuprofen and ketoprofen are now available in single-enantiomer preparations (dexibuprofen and dexketoprofen), which purport to offer quicker onset and an improved side-effect profile. Naproxen has always been marketed as the single active enantiomer. Main practical differences NSAIDs within a group tend to have similar characteristics and tolerability. There is little difference in clinical efficacy among the NSAIDs when used at equivalent doses. Rather, differences among compounds usually relate to dosing regimens (related to the compound's elimination half-life), route of administration, and tolerability profile. Regarding adverse effects, selective COX-2 inhibitors have lower risk of gastrointestinal bleeding. With the exception of naproxen, nonselective NSAIDs increase the risk of having a heart attack. Some data also supports that the partially selective nabumetone is less likely to cause gastrointestinal events. A consumer report noted that ibuprofen, naproxen, and salsalate are less expensive than other NSAIDs, and essentially as effective and safe when used appropriately to treat osteoarthritis and pain. Pharmacokinetics Most nonsteroidal anti-inflammatory drugs are weak acids, with a pKa of 3–5. They are absorbed well from the stomach and intestinal mucosa. They are highly protein-bound in plasma (typically >95%), usually to albumin, so that their volume of distribution typically approximates to plasma volume. Most NSAIDs are metabolized in the liver by oxidation and conjugation to inactive metabolites that typically are excreted in the urine, though some drugs are partially excreted in bile. Metabolism may be abnormal in certain disease states, and accumulation may occur even with normal dosage. NSAIDs can also be divided into short-acting (plasma half-life less than 6 h) such as aspirin, diclofenac and ibuprofen and long-acting (half-life approximately greater than 10 h) such as naproxen, celecoxib. History It is widely believed that naturally occurring salicin in willow trees and other plants was used by the ancients as a form of analgesic or anti-inflammatory drug, but this story, although compelling, is not entirely true. Hippocrates does not mention willow at all. Dioscorides's De materia medica was arguably the most influential herbal from Roman to Medieval times but, if he mentions willow at all (there is doubt about the identity of 'Itea'), then he used the ashes, steeped in vinegar, as a treatment for corns, which corresponds well with modern uses of salicylic acid. Willow bark (from trees of the Salix genus) was widely known to be used as a medicine by multiple First Nations communities. The bark would be chewed or steeped in water for its pain relieving and antipyretic effects. The effects are a result of the bark's salicin content. Meadowsweet, another plant to contain salicin, has strong roots in British folk medicine for the same maladies. Willow bark was first reported in Western science by Edward Stone in 1763 as a treatment for ague (fever) according to the pseudoscientific doctrine of signatures. 
In the body, salicin is turned into salicylic acid, which produces the antipyretic and analgesic effects that the plants are known for. Salicin was first isolated by Johann Andreas Buchner in 1827. By 1829, French chemist Henri Leroux had improved the extraction process to obtain about 30 g of purified salicin from 1.5 kg of willow bark. By hydrolysis, salicin releases glucose and salicyl alcohol, which can be converted into salicylic acid, both in vivo and through chemical methods. In 1869, Hermann Kolbe synthesised salicylic acid, although it was too acidic for the gastric mucosa. The reaction used to synthesise an aromatic hydroxy acid from a phenol in the presence of carbon dioxide (CO2) is known as the Kolbe–Schmitt reaction. By 1897, the German chemist Felix Hoffmann and the Bayer company prompted a new age of pharmacology by converting salicylic acid into acetylsalicylic acid—named aspirin by Heinrich Dreser. Other NSAIDs like ibuprofen were developed from the 1950s forward. In 2001, NSAIDs accounted for 70,000,000 prescriptions and 30 billion over-the-counter doses sold annually in the United States. Veterinary use Research supports the use of NSAIDs for the control of pain associated with veterinary procedures such as dehorning and castration of calves. The best effect is obtained by combining a short-term local anesthetic such as lidocaine with an NSAID acting as a longer-term analgesic. However, as different species have varying reactions to different medications in the NSAID family, little of the existing research data can be extrapolated to animal species other than those specifically studied, and the relevant government agency in one area sometimes prohibits uses approved in other jurisdictions. In the United States, meloxicam is approved for use only in canines, whereas (due to concerns about liver damage) it carries warnings against its use in cats except for one-time use during surgery. In spite of these warnings, meloxicam is frequently prescribed "off-label" for non-canine animals including cats and livestock species. In other countries, for example the European Union (EU), there is a label claim for use in cats.
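The pharmacokinetics section above notes that most NSAIDs are weak acids with a pKa of 3–5 and are absorbed well from the stomach. A minimal sketch of the underlying Henderson–Hasselbalch relationship is given below; the pKa of 4.0 and the pH values used are illustrative assumptions, not figures from the text.

```python
# A minimal sketch of the Henderson-Hasselbalch relationship applied to a
# weak-acid NSAID: largely un-ionized (and thus more readily absorbed) in
# acidic gastric fluid, almost fully ionized at plasma pH. The pKa and pH
# values are illustrative assumptions.

def fraction_unionized_weak_acid(pH, pKa):
    """Fraction of a weak acid in the un-ionized (absorbable) form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for site, pH in [("gastric fluid", 1.5), ("plasma", 7.4)]:
    f = fraction_unionized_weak_acid(pH, pKa=4.0)
    print(f"{site} (pH {pH}): {f:.1%} un-ionized")
```

Under these assumed values the drug is almost entirely un-ionized in the stomach, which favours passive absorption across the gastric mucosa, and almost entirely ionized once in plasma, consistent with the behaviour described for weak acids above.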
Biology and health sciences
Pain treatments
Health
22102
https://en.wikipedia.org/wiki/Naval%20mine
Naval mine
A naval mine is a self-contained explosive weapon placed in water to damage or destroy surface ships or submarines. Similar to anti-personnel and other land mines, and unlike purpose launched naval depth charges, they are deposited and left to wait until, depending on their fuzing, they are triggered by the approach of or contact with any vessel. Naval mines can be used offensively, to hamper enemy shipping movements or lock vessels into a harbour; or defensively, to create "safe" zones protecting friendly sea lanes, harbours, and naval assets. Mines allow the minelaying force commander to concentrate warships or defensive assets in mine-free areas giving the adversary three choices: undertake a resource-intensive and time-consuming minesweeping effort, accept the casualties of challenging the minefield, or use the unmined waters where the greatest concentration of enemy firepower will be encountered. Although international law requires signatory nations to declare mined areas, precise locations remain secret, and non-complying parties might not disclose minelaying. While mines threaten only those who choose to traverse waters that may be mined, the possibility of activating a mine is a powerful disincentive to shipping. In the absence of effective measures to limit each mine's lifespan, the hazard to shipping can remain long after the war in which the mines were laid is over. Unless detonated by a parallel time fuze at the end of their useful life, naval mines need to be found and dismantled after the end of hostilities; an often prolonged, costly, and hazardous task. Modern mines containing high explosives detonated by complex electronic fuze mechanisms are much more effective than early gunpowder mines requiring physical ignition. Mines may be placed by aircraft, ships, submarines, or individual swimmers and boatmen. Minesweeping is the practice of the removal of explosive naval mines, usually by a specially designed ship called a minesweeper using various measures to either capture or detonate the mines, but sometimes also with an aircraft made for that purpose. There are also mines that release a homing torpedo rather than explode themselves. Description Mines can be laid in many ways: by purpose-built minelayers, refitted ships, submarines, or aircraft—and even by dropping them into a harbour by hand. They can be inexpensive: some variants can cost as little as US $2,000, though more sophisticated mines can cost millions of dollars, be equipped with several kinds of sensors, and deliver a warhead by rocket or torpedo. Their flexibility and cost-effectiveness make mines attractive to the less powerful belligerent in asymmetric warfare. The cost of producing and laying a mine is usually between 0.5% and 10% of the cost of removing it, and it can take up to 200 times as long to clear a minefield as to lay it. Parts of some World War II naval minefields still exist because they are too extensive and expensive to clear. Some 1940s-era mines may remain dangerous for many years. Mines have been employed as offensive or defensive weapons in rivers, lakes, estuaries, seas, and oceans, but they can also be used as tools of psychological warfare. Offensive mines are placed in enemy waters, outside harbours, and across important shipping routes to sink both merchant and military vessels. Defensive minefields safeguard key stretches of coast from enemy ships and submarines, forcing them into more easily defended areas, or keeping them away from sensitive ones. 
Shipowners are reluctant to send their ships through known minefields. Port authorities may attempt to clear a mined area, but those without effective minesweeping equipment may cease using the area. Transit of a mined area will be attempted only when strategic interests outweigh potential losses. The decision-makers' perception of the minefield is a critical factor. Minefields designed for psychological effect are usually placed on trade routes to stop ships from reaching an enemy nation. They are often spread thinly, to create an impression of minefields existing across large areas. A single mine inserted strategically on a shipping route can stop maritime movements for days while the entire area is swept. A mine's capability to sink ships makes it a credible threat, but minefields work more on the mind than on ships. International law, specifically the Eighth Hague Convention of 1907, requires nations to declare when they mine an area, to make it easier for civil shipping to avoid the mines. The warnings do not have to be specific; for example, during World War II, Britain declared simply that it had mined the English Channel, North Sea and French coast. History Early use Naval mines were first invented by Chinese innovators of Imperial China and were described in thorough detail by the early Ming dynasty artillery officer Jiao Yu, in his 14th-century military treatise known as the Huolongjing. Chinese records tell of naval explosives in the 16th century, used to fight against Japanese pirates (wokou). This kind of naval mine was loaded in a wooden box, sealed with putty. General Qi Jiguang made several timed, drifting explosives, to harass Japanese pirate ships. The Tiangong Kaiwu (The Exploitation of the Works of Nature) treatise, written by Song Yingxing in 1637, describes naval mines with a ripcord pulled by hidden ambushers located on the nearby shore who rotated a steel wheel flint mechanism to produce sparks and ignite the fuze of the naval mine. Although this is the rotating steel wheel's first use in naval mines, Jiao Yu described their use for land mines in the 14th century. The first plan for a sea mine in the West was by Ralph Rabbards, who presented his design to Queen Elizabeth I of England in 1574. The Dutch inventor Cornelius Drebbel was employed in the Office of Ordnance by King Charles I of England to make weapons, including the failed "floating petard". Weapons of this type were apparently tried by the English at the Siege of La Rochelle in 1627. American David Bushnell developed the first American naval mine, for use against the British in the American War of Independence. It was a watertight keg filled with gunpowder that was floated toward the enemy, detonated by a sparking mechanism if it struck a ship. It was used on the Delaware River as a drift mine, destroying a small boat near its intended target, a British warship. The 19th century The 1804 Raid on Boulogne made extensive use of explosive devices designed by inventor Robert Fulton. The 'torpedo-catamaran' was a coffer-like device balanced on two wooden floats and steered by a man with a paddle. Weighted with lead so as to ride low in the water, the operator was further disguised by wearing dark clothes and a black cap. His task was to approach the French ship, hook the torpedo to the anchor cable and, having activated the device by removing a pin, remove the paddles and escape before the torpedo detonated. Also to be deployed were large numbers of casks filled with gunpowder, ballast and combustible balls. 
They would float in on the tide and explode on washing up against an enemy's hull. Also included in the force were several fireships, carrying 40 barrels of gunpowder and rigged to explode by a clockwork mechanism. In 1812, Russian engineer Pavel Shilling exploded an underwater mine using an electrical circuit. In 1842 Samuel Colt used an electric detonator to destroy a moving vessel to demonstrate an underwater mine of his own design to the United States Navy and President John Tyler. However, opposition from former president John Quincy Adams scuttled the project as "not fair and honest warfare". In 1854, during the unsuccessful attempt of the Anglo-French fleet (101 warships) to seize the Kronstadt fortress, several British steamships, among them HMS Firefly, suffered damage from the underwater explosions of Russian naval mines (on 9 June 1855, the first successful mining in Western history). Russian naval specialists set more than 1,500 naval mines, or infernal machines, designed by Moritz von Jacobi and by Immanuel Nobel, in the Gulf of Finland during the Crimean War of 1853–1856. The mining of Vulcan led to the world's first minesweeping operation. During the next 72 hours, 33 mines were swept. The Jacobi mine was designed in 1853 by Jacobi, a German-born Russian engineer. The mine was tied to the sea bottom by an anchor, and a cable connected it to a galvanic cell which powered it from the shore; its explosive charge was a quantity of black powder. In the summer of 1853, the production of the mine was approved by the Committee for Mines of the Ministry of War of the Russian Empire. In 1854, 60 Jacobi mines were laid in the vicinity of the Forts Pavel and Alexander (Kronstadt) to deter the British Baltic Fleet from attacking them. At the insistence of Admiral Fyodor Litke, the Jacobi mine gradually displaced its direct competitor, the Nobel mine. The Nobel mines were bought from Swedish industrialist Immanuel Nobel, who had entered into collusion with the Russian head of navy Alexander Sergeyevich Menshikov. Despite their high cost (100 Russian rubles), the Nobel mines proved faulty: they exploded while being laid, failed to explode, or detached from their wires and drifted uncontrollably; at least 70 of them were subsequently disarmed by the British. In 1855, 301 more Jacobi mines were laid around Kronstadt and Lisy Nos, and British ships did not dare to approach them. In the 19th century, mines were called torpedoes, a name probably conferred by Robert Fulton after the torpedo fish, which gives powerful electric shocks. A spar torpedo was a mine attached to a long pole and detonated when the ship carrying it rammed another one and withdrew a safe distance. A Confederate submarine used one to sink a Union warship on 17 February 1864. A Harvey torpedo was a type of floating mine towed alongside a ship and was briefly in service in the Royal Navy in the 1870s. Other "torpedoes" were attached to ships or propelled themselves. One such weapon, called the Whitehead torpedo after its inventor, caused the word "torpedo" to apply to self-propelled underwater missiles as well as to static devices. These mobile devices were also known as "fish torpedoes". The American Civil War of 1861–1865 also saw the successful use of mines. The first ship sunk by a mine in that war, a Union gunboat, foundered in 1862 in the Yazoo River. Rear Admiral David Farragut's famous command during the Battle of Mobile Bay in 1864, "Damn the torpedoes, full speed ahead!", refers to a minefield laid at Mobile, Alabama.
After 1865 the United States adopted the mine as its primary weapon for coastal defense. In the decade following 1868, Major Henry Larcom Abbot carried out a lengthy set of experiments to design and test moored mines that could be exploded on contact or be detonated at will as enemy shipping passed near them. This initial development of mines in the United States took place under the purview of the U.S. Army Corps of Engineers, which trained officers and men in their use at the Engineer School of Application at Willets Point, New York (later named Fort Totten). In 1901 underwater minefields became the responsibility of the US Army's Artillery Corps, and in 1907 this was a founding responsibility of the United States Army Coast Artillery Corps. The Imperial Russian Navy, a pioneer in mine warfare, successfully deployed mines against the Ottoman Navy during both the Crimean War and the Russo-Turkish War (1877–1878). During the War of the Pacific (1879–1883), at a time when the Chilean squadron was blockading the Peruvian ports, the Peruvian Navy formed a brigade of torpedo boats under the command of frigate captain Leopoldo Sánchez Calderón and the Peruvian engineer Manuel Cuadros, who refined a naval torpedo, or mine, designed to detonate electrically when cargo resting on top of it was lifted. In this way, on 3 July 1880, off the port of Callao, the Chilean armed transport Loa was blown up after capturing a sloop that the Peruvians had mined. A similar fate befell the gunboat schooner Covadonga off the port of Chancay on 13 September 1880: having captured and inspected an attractive small boat, she was destroyed when it exploded as it was being hoisted up her side. During the Battle of Tamsui (1884), in the Keelung Campaign of the Sino-French War, Chinese forces in Taiwan under Liu Mingchuan took measures to reinforce Tamsui against the French; they planted nine torpedo mines in the river and blocked the entrance. Early 20th century During the Boxer Rebellion, Imperial Chinese forces deployed a command-detonated minefield at the mouth of the Hai River before the Dagu forts, to prevent the western Allied forces from sending ships to attack. The next major use of mines was during the Russo-Japanese War of 1904–1905. Two mines blew up when the Russian flagship struck them near Port Arthur, sending the holed vessel to the bottom and killing the fleet commander, Admiral Stepan Makarov, and most of his crew in the process. The toll inflicted by mines was not confined to the Russians, however. The Japanese Navy lost two battleships, four cruisers, two destroyers and a torpedo boat to offensively laid mines during the war. Most famously, on 15 May 1904, the Russian minelayer Amur planted a 50-mine minefield off Port Arthur and succeeded in sinking two Japanese battleships. Following the end of the Russo-Japanese War, several nations attempted to have mines banned as weapons of war at the Hague Peace Conference (1907). Many early mines were fragile and dangerous to handle, as they contained glass containers filled with nitroglycerin or mechanical devices that activated a blast upon tipping. Several mine-laying ships were destroyed when their cargo exploded. Beginning around the start of the 20th century, submarine mines played a major role in the defense of U.S. harbours against enemy attacks as part of the Endicott and Taft Programs. The mines employed were controlled mines, anchored to the bottoms of the harbours, and detonated under control from large mine casemates onshore.
During World War I, mines were used extensively to defend coasts, coastal shipping, ports and naval bases around the globe. The Germans laid mines in shipping lanes to sink merchant and naval vessels serving Britain. The Allies targeted the German U-boats in the Strait of Dover and the Hebrides. In an attempt to seal up the northern exits of the North Sea, the Allies developed the North Sea Mine Barrage. During a period of five months from June 1918, almost 70,000 mines were laid spanning the North Sea's northern exits. The total number of mines laid in the North Sea, the British East Coast, Straits of Dover, and Heligoland Bight is estimated at 190,000, and the total number laid during the whole of WWI was 235,000 sea mines. Clearing the barrage after the war took 82 ships and five months, working around the clock. It was also during World War I that the British hospital ship Britannic became the largest vessel ever sunk by a naval mine; the Britannic was a sister ship of the RMS Titanic and the RMS Olympic. World War II During World War II, the U-boat fleet, which dominated much of the Battle of the Atlantic, was small at the beginning of the war and much of the early action by German forces involved mining convoy routes and ports around Britain. German submarines also operated in the Mediterranean Sea, in the Caribbean Sea, and along the U.S. coast. Initially, contact mines (requiring a ship to physically strike a mine to detonate it) were employed, usually tethered at the end of a cable just below the surface of the water. Contact mines usually blew a hole in ships' hulls. By the beginning of World War II, most nations had developed mines that could be dropped from aircraft, some of which floated on the surface, making it possible to lay them in enemy harbours. The use of dredging and nets was effective against this type of mine, but this consumed valuable time and resources and required harbours to be closed. Later, some ships survived mine blasts, limping into port with buckled plates and broken backs. This appeared to be due to a new type of mine, detecting ships by their proximity to the mine (an influence mine) and detonating at a distance, causing damage with the shock wave of the explosion. Ships that had successfully run the gauntlet of the Atlantic crossing were sometimes destroyed entering freshly cleared British harbours. More shipping was being lost than could be replaced, and Churchill ordered the intact recovery of one of these new mines to be of the highest priority. The British experienced a stroke of luck in November 1939, when a German mine was dropped from an aircraft onto the mudflats off Shoeburyness during low tide. Additionally, the land belonged to the army, and a base with men and workshops was at hand. Experts were dispatched from HMS Vernon to investigate the mine. The Royal Navy knew that mines could use magnetic sensors, Britain having developed magnetic mines in World War I, so everyone removed all metal, including their buttons, and made tools of non-magnetic brass. They disarmed the mine and rushed it to the labs at HMS Vernon, where scientists discovered that the mine had a magnetic arming mechanism. A large ferrous object passing through the Earth's magnetic field will concentrate the field through it, due to its magnetic permeability; the mine's detector was designed to trigger as a ship passed over, when the Earth's magnetic field was concentrated in the ship and away from the mine. The mine detected this drop in the magnetic field, which caused it to detonate.
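As a rough illustration of the triggering principle just described, the following Python sketch models a detector that fires when the measured field magnitude drops more than a set amount below the ambient background. The field values, threshold and function names are illustrative assumptions, not details of any historical fuze.

```python
# Illustrative sketch only: a simplified magnetic-influence trigger that fires
# when the measured field falls a set amount below the ambient background.
# All numbers and names are assumptions for illustration, not historical values.

AMBIENT_FIELD_MGAUSS = 480.0   # assumed ambient (Earth) field at the mine, in milligauss
TRIGGER_DROP_MGAUSS = 40.0     # assumed sensitivity: drop needed to fire

def should_detonate(measured_mgauss: float) -> bool:
    """Fire when the local field is concentrated into a passing hull,
    i.e. the field at the mine drops below ambient by more than the threshold."""
    return (AMBIENT_FIELD_MGAUSS - measured_mgauss) > TRIGGER_DROP_MGAUSS

# Simulated readings as a large ferrous hull passes overhead (milligauss).
readings = [480, 479, 471, 452, 430, 455, 476, 480]
for t, value in enumerate(readings):
    if should_detonate(value):
        print(f"t={t}: field {value} mG -> trigger")
        break
```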
The mechanism had an adjustable sensitivity, calibrated in milligauss. From this data, known methods were used to clear these mines. Early methods included the use of large electromagnets dragged behind ships or below low-flying aircraft (a number of older bombers like the Vickers Wellington were used for this). Both of these methods had the disadvantage of "sweeping" only a small strip. A better solution was found in the "Double-L Sweep", using electrical cables dragged behind ships that passed large pulses of current through the seawater. This created a large magnetic field and swept the entire area between the two ships. The older methods continued to be used in smaller areas. The Suez Canal continued to be swept by aircraft, for instance. While these methods were useful for clearing mines from local ports, they were of little or no use for enemy-controlled areas. Such waters were typically entered only by warships, and the majority of the fleet therefore underwent a massive degaussing process, in which their hulls had a slight "south" bias induced into them which offset the concentration effect almost to zero. Initially, major warships and large troopships had a copper degaussing coil fitted around the perimeter of the hull, energized by the ship's electrical system whenever in suspected magnetic-mined waters. Among the first to be so fitted were a fleet carrier and two large liners. It was a photo of one of these liners in New York harbour, showing the degaussing coil, which revealed to German Naval Intelligence the fact that the British were using degaussing methods to combat their magnetic mines. Fitting coils was felt to be impractical for smaller warships and merchant vessels, mainly because the ships lacked the generating capacity to energise such a coil. It was found that "wiping" a current-carrying cable up and down a ship's hull temporarily canceled the ship's magnetic signature sufficiently to nullify the threat. This started in late 1939, and by 1940 merchant vessels and the smaller British warships were largely immune for a few months at a time, until they once again built up a field. A British cruiser provides just one example of a ship struck by a magnetic mine during this time: on 21 November 1939, a mine broke her keel, damaging her engine and boiler rooms and injuring 46 men, one of whom later died from his injuries. She was towed to Rosyth for repairs. Incidents like this resulted in many of the boats that sailed to Dunkirk being degaussed in a marathon four-day effort by degaussing stations. The Allies and Germany deployed acoustic mines in World War II, against which even wooden-hulled ships (in particular minesweepers) remained vulnerable. Japan developed sonic generators to sweep these; the gear was not ready by war's end. The primary method Japan used was small air-delivered bombs. This was profligate and ineffectual; used against acoustic mines at Penang, 200 bombs were needed to detonate just 13 mines. The Germans developed a pressure-activated mine and planned to deploy it as well, but they saved it for later use when it became clear the British had defeated the magnetic system. The U.S. also deployed these, adding "counters" which would allow a variable number of ships to pass unharmed before detonating. This made them a great deal harder to sweep. Mining campaigns could have devastating consequences. The U.S.
effort against Japan, for instance, closed major ports, such as Hiroshima, for days, and by the end of the Pacific War had cut the amount of freight passing through Kobe–Yokohama by 90%. When the war ended, more than 25,000 U.S.-laid mines were still in place, and the Navy proved unable to sweep them all, limiting efforts to critical areas. After sweeping for almost a year, in May 1946, the Navy abandoned the effort with 13,000 mines still unswept. Over the next thirty years, more than 500 minesweepers (of a variety of types) were damaged or sunk clearing them. The U.S. began adding delay counters to their magnetic mines in June 1945. Cold War era Since World War II, mines have damaged 14 United States Navy ships, whereas air and missile attacks have damaged four. During the Korean War, mines laid by North Korean forces accounted for 70% of the casualties suffered by U.S. naval vessels and caused four sinkings. During the Iran–Iraq War from 1980 to 1988, the belligerents mined several areas of the Persian Gulf and nearby waters. On 24 July 1987, the supertanker SS Bridgeton was mined by Iran near Farsi Island. On 14 April 1988, a United States Navy frigate struck an Iranian mine in the central Persian Gulf shipping lane, wounding 10 sailors. In the summer of 1984, magnetic sea mines damaged at least 19 ships in the Red Sea. The U.S. concluded Libya was probably responsible for the minelaying. In response, the U.S., Britain, France, and three other nations launched Operation Intense Look, a minesweeping operation in the Red Sea involving more than 46 ships. On the orders of the Reagan administration, the CIA mined Nicaragua's Sandino port in 1984 in support of the Contras. A Soviet tanker was among the ships damaged by these mines. In 1986, in the case of Nicaragua v. United States, the International Court of Justice ruled that this mining was a violation of international law. Post Cold War During the Gulf War, Iraqi naval mines severely damaged two United States Navy warships. When the war concluded, eight countries conducted clearance operations. Houthi forces in the Yemeni Civil War have made frequent use of naval mines, laying over 150 in the Red Sea throughout the conflict. In the first month of the 2022 Russian invasion of Ukraine, Ukraine accused Russia of deliberately employing drifting mines in the Black Sea area. Around the same time, Turkish and Romanian military diving teams carried out defusing operations when stray mines were spotted near their coasts. The London P&I Club issued a warning to freight ships in the area, advising them to "maintain lookouts for mines and pay careful attention to local navigation warnings". Ukrainian forces have mined "from the Sea of Azov to the Black Sea which banks the critical city of Odesa." Types Naval mines may be classified into three major groups: contact, remote and influence mines. Contact mines The earliest mines were usually of this type. They are still used today, as they are extremely low cost compared to any other anti-ship weapon and are effective, both as a psychological weapon and as a method to sink enemy ships. Contact mines need to be touched by the target before they detonate, limiting the damage to the direct effects of the explosion and usually affecting only the vessel that triggers them. Early mines had mechanical mechanisms to detonate them, but these were superseded in the 1870s by the "Hertz horn" (or "chemical horn"), which was found to work reliably even after the mine had been in the sea for several years.
The mine's upper half is studded with hollow lead protuberances, each containing a glass vial filled with sulfuric acid. When a ship's hull crushes the metal horn, it cracks the vial inside it, allowing the acid to run down a tube and into a lead–acid battery which until then contained no acid electrolyte. This energizes the battery, which detonates the explosive. Earlier forms of the detonator employed a vial of sulfuric acid surrounded by a mixture of potassium perchlorate and sugar. When the vial was crushed, the acid ignited the perchlorate-sugar mix, and the resulting flame ignited the gunpowder charge. During the initial period of World War I, the Royal Navy used contact mines in the English Channel and later in large areas of the North Sea to hinder patrols by German submarines. Later, the American antenna mine was widely used because submarines could be at any depth from the surface to the seabed. This type of mine had a copper wire attached to a buoy that floated above the explosive charge which was weighted to the seabed with a steel cable. If a submarine's steel hull touched the copper wire, the slight voltage change caused by contact between two dissimilar metals was amplified and detonated the explosives. Limpet mines Limpet mines are a special form of contact mine that are manually attached to the target by magnets and remain in place. They are named because of the similarity to the limpet, a mollusk. Moored contact mines Generally, this type of mine is set to float just below the surface of the water or as deep as five meters. A steel cable connecting the mine to an anchor on the seabed prevents it from drifting away. The explosive and detonating mechanism is contained in a buoyant metal or plastic shell. The depth below the surface at which the mine floats can be set so that only deep draft vessels such as aircraft carriers, battleships or large cargo ships are at risk, saving the mine from being used on a less valuable target. In littoral waters it is important to ensure that the mine does not become visible when the sea level falls at low tide, so the cable length is adjusted to take account of tides. During WWII there were mines that could be moored in -deep water. Floating mines typically have a mass of around , including of explosives e.g. TNT, minol or amatol. Moored contact mines with plummet A special form of moored contact mines are those equipped with a plummet. When the mine is launched (1), the mine with the anchor floats first and the lead plummet sinks from it (2). In doing so, the plummet unwinds a wire, the deep line, which is used to set the depth of the mine below the water surface before it is launched (3). When the deep line has been unwound to a set length, the anchor is flooded and the mine is released from the anchor (4). The anchor begins to sink and the mooring cable unwinds until the plummet reaches the sea floor (5). Triggered by the decreasing tension on the deep line, the mooring cable is clamped. The anchor continues sinking down to the bottom of the sea, pulling the mine below the water surface to a depth equal to the length of the deep line (6). Thus, even without knowing the exact seafloor depth, an exact depth of the mine below the water surface can be set, limited only by the maximum length of the mooring cable. Drifting contact mines Drifting mines were occasionally used during World War I and World War II. However, they were more feared than effective. 
Sometimes floating mines break from their moorings and become drifting mines; modern mines are designed to deactivate in this event. After several years at sea, the deactivation mechanism might not function as intended and the mines may remain live. Admiral Jellicoe's British fleet did not pursue and destroy the outnumbered German High Seas Fleet when it turned away at the Battle of Jutland because he thought they were leading him into a trap: he believed it possible that the Germans were either leaving floating mines in their wake, or were drawing him towards submarines, although neither of these was the case. After World War I the drifting contact mine was banned, but was occasionally used during World War II. The drifting mines were much harder to remove than tethered mines after the war, and they caused about the same damage to both sides. Churchill promoted "Operation Royal Marine" in 1940 and again in 1944 where floating mines were put into the Rhine in France to float down the river, becoming active after a time calculated to be long enough to reach German territory. Remotely controlled mines Frequently used in combination with coastal artillery and hydrophones, controlled mines (or command detonation mines) can be in place in peacetime, which is a huge advantage in blocking important shipping routes. The mines can usually be turned into "normal" mines with a switch (which prevents the enemy from simply capturing the controlling station and deactivating the mines), detonated on a signal or be allowed to detonate on their own. The earliest ones were developed around 1812 by Robert Fulton. The first remotely controlled mines were moored mines used in the American Civil War, detonated electrically from shore. They were considered superior to contact mines because they did not put friendly shipping at risk. The extensive American fortifications program initiated by the Board of Fortifications in 1885 included remotely controlled mines, which were emplaced or in reserve from the 1890s until the end of World War II. Modern examples usually weigh , including of explosives (TNT or torpex). Influence mines These mines are triggered by the influence of a ship or submarine, rather than direct contact. Such mines incorporate sensors designed to detect the presence of a vessel and detonate when it comes within the blast range of the warhead. The fuzes on such mines may incorporate one or more of the following sensors: magnetic, passive acoustic or water pressure displacement caused by the proximity of a vessel. First used during WWI, their use became more general in WWII. The sophistication of influence mine fuzes has increased considerably over the years as first transistors and then microprocessors have been incorporated into designs. Simple magnetic sensors have been superseded by total-field magnetometers. Whereas early magnetic mine fuzes would respond only to changes in a single component of a target vessel's magnetic field, a total field magnetometer responds to changes in the magnitude of the total background field (thus enabling it to better detect even degaussed ships). Similarly, the original broadband hydrophones of 1940s acoustic mines (which operate on the integrated volume of all frequencies) have been replaced by narrow-band sensors which are much more sensitive and selective. Mines can now be programmed to listen for highly specific acoustic signatures (e.g. a gas turbine powerplant or cavitation sounds from a particular design of propeller) and ignore all others. 
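As a rough illustration of the narrow-band listening described above, the following Python sketch measures the energy in a single target frequency band with the Goertzel algorithm and compares it with the total signal energy before declaring a match. The sample rate, target frequency, threshold and function names are illustrative assumptions, not details of any real fuze.

```python
# Illustrative sketch only: narrow-band detection of a target tone using the
# Goertzel algorithm, compared against total signal energy. All frequencies,
# thresholds and names are assumptions for illustration.
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of the signal in a narrow band around target_hz."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def matches_signature(samples, sample_rate, target_hz, min_ratio=0.4):
    """Declare a match when the target band carries a large share of total energy."""
    band = goertzel_power(samples, sample_rate, target_hz)
    # Total spectral energy across all bins (Parseval), same scale as the Goertzel output.
    total = sum(x * x for x in samples) * len(samples)
    return total > 0 and band / total >= min_ratio

# A synthetic 60 Hz "propeller" tone sampled at 1 kHz should match; noise-free here.
fs, f = 1000, 60
tone = [math.sin(2 * math.pi * f * i / fs) for i in range(fs)]
print(matches_signature(tone, fs, target_hz=60))   # True
print(matches_signature(tone, fs, target_hz=200))  # False
```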
The sophistication of modern electronic mine fuzes incorporating these digital signal processing capabilities makes it much more difficult to detonate the mine with electronic countermeasures because several sensors working together (e.g. magnetic, passive acoustic and water pressure) allow it to ignore signals which are not recognised as being the unique signature of an intended target vessel. Modern influence mines such as the BAE Stonefish are computerised, with all the programmability this implies, such as the ability to quickly load new acoustic signatures into fuzes, or program them to detect a single, highly distinctive target signature. In this way, a mine with a passive acoustic fuze can be programmed to ignore all friendly vessels and small enemy vessels, only detonating when a very large enemy target passes over it. Alternatively, the mine can be programmed specifically to ignore all surface vessels regardless of size and exclusively target submarines. Even as far back as WWII it was possible to incorporate a "ship counter" function in mine fuzes. This might set the mine to ignore the first two ships passing over it (which could be minesweepers deliberately trying to trigger mines) but detonate when the third ship passes overhead, which could be a high-value target such as an aircraft carrier or oil tanker. Even though modern mines are generally powered by a long life lithium battery, it is important to conserve power because they may need to remain active for months or even years. For this reason, most influence mines are designed to remain in a semi-dormant state until an unpowered (e.g. deflection of a mu-metal needle) or low-powered sensor detects the possible presence of a vessel, at which point the mine fuze powers up fully and the passive acoustic sensors will begin to operate for some minutes. It is possible to program computerised mines to delay activation for days or weeks after being laid. Similarly, they can be programmed to self-destruct or render themselves safe after a preset period of time. Generally, the more sophisticated the mine design, the more likely it is to have some form of anti-handling device to hinder clearance by divers or remotely piloted submersibles. Moored mines The moored mine is the backbone of modern mine systems. They are deployed where water is too deep for bottom mines. They can use several kinds of instruments to detect an enemy, usually a combination of acoustic, magnetic and pressure sensors, or more sophisticated optical shadows or electro potential sensors. These cost many times more than contact mines. Moored mines are effective against most kinds of ships. As they are cheaper than other anti-ship weapons they can be deployed in large numbers, making them useful area denial or "channelizing" weapons. Moored mines usually have lifetimes of more than 10 years, and some almost unlimited. These mines usually weigh , including of explosives (RDX). In excess of of explosives the mine becomes inefficient, as it becomes too large to handle and the extra explosives add little to the mine's effectiveness. Bottom mines Bottom mines (sometimes called ground mines) are used when the water is no more than deep or when mining for submarines down to around . They are much harder to detect and sweep, and can carry a much larger warhead than a moored mine. Bottom mines commonly use multiple types of sensors, which are less sensitive to sweeping. These mines usually weigh between , including between of explosives. 
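As a rough sketch of the fuze behaviour described earlier in this section (delayed activation after laying, agreement between several sensors, a ship counter, and eventual self-destruction), the following Python fragment strings those checks together. Every threshold, count and name is an illustrative assumption rather than a description of any actual mine.

```python
# Illustrative sketch only: a simplified influence-mine fuze combining delayed
# arming, multi-sensor agreement and a ship counter. All values are assumptions.
from dataclasses import dataclass

@dataclass
class FuzeConfig:
    arming_delay_hours: float = 72.0   # stay inert after laying
    ships_to_ignore: int = 2           # let likely minesweepers pass
    sensors_required: int = 2          # how many sensors must agree
    self_destruct_days: float = 180.0  # render safe after this period

class InfluenceFuze:
    def __init__(self, config: FuzeConfig):
        self.config = config
        self.ship_count = 0

    def decide(self, hours_since_laying: float, magnetic: bool,
               acoustic: bool, pressure: bool) -> str:
        if hours_since_laying < self.config.arming_delay_hours:
            return "dormant"
        if hours_since_laying > self.config.self_destruct_days * 24:
            return "self-destruct"
        agreeing = sum([magnetic, acoustic, pressure])
        if agreeing < self.config.sensors_required:
            return "ignore"            # not a recognised target signature
        self.ship_count += 1
        if self.ship_count <= self.config.ships_to_ignore:
            return "count"             # let the first ships pass unharmed
        return "detonate"

fuze = InfluenceFuze(FuzeConfig())
print(fuze.decide(10, True, True, False))    # dormant (still inside arming delay)
print(fuze.decide(100, True, False, False))  # ignore (only one sensor agrees)
print(fuze.decide(100, True, True, False))   # count (first recognised ship)
print(fuze.decide(120, True, True, True))    # count (second recognised ship)
print(fuze.decide(150, True, True, False))   # detonate (third recognised ship)
```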
Unusual mines Several specialized mines have been developed for other purposes than the common minefield. Bouquet mine The bouquet mine is a single anchor attached to several floating mines. It is designed so that when one mine is swept or detonated, another takes its place. It is a very sensitive construction and lacks reliability. Anti-sweep mine The anti-sweep mine is a very small mine ( warhead) with as small a floating device as possible. When the wire of a mine sweep hits the anchor wire of the mine, it drags the anchor wire along with it, pulling the mine down into contact with the sweeping wire. That detonates the mine and cuts the sweeping wire. They are very cheap and usually used in combination with other mines in a minefield to make sweeping more difficult. One type is the Mark 23 used by the United States during World War II. Oscillating mine The mine is hydrostatically controlled to maintain a pre-set depth below the water's surface independently of the rise and fall of the tide. Ascending mine The ascending mine is a floating distance mine that may cut its mooring or in some other way float higher when it detects a target. It lets a single floating mine cover a much larger depth range. Homing mines These are mines containing a moving weapon as a warhead, either a torpedo or a rocket. Rocket mine A Russian invention, the rocket mine is a bottom distance mine that fires a homing high-speed rocket (not torpedo) upwards towards the target. It is intended to allow a bottom mine to attack surface ships as well as submarines from a greater depth. One type is the Te-1 rocket propelled mine. Torpedo mine A torpedo mine is a self-propelled variety, able to lie in wait for a target and then pursue it e.g. the Mark 60 CAPTOR. Generally, torpedo mines incorporate computerised acoustic and magnetic fuzes. The U.S. Mark 24 "mine", code-named Fido, was actually an ASW homing torpedo. The mine designation was disinformation to conceal its function. Mobile mine The mine is propelled to its intended position by propulsion equipment such as a torpedo. After reaching its destination, it sinks to the seabed and operates like a standard mine. It differs from the homing mine in that its mobile stage is set before it lies in wait, rather than as part of the attacking phase. One such design is the Mk 67 Submarine Launched Mobile Mine (which is based on a Mark 37 torpedo), capable of traveling as far as through or into a channel, harbour, shallow water area, and other zones which would normally be inaccessible to craft laying the device. After reaching the target area they sink to the sea bed and act like conventionally laid influence mines. Nuclear mine During the Cold War, a test was conducted with a naval mine fitted with tactical nuclear warheads for the "Baker" shot of Operation Crossroads. This weapon was experimental and never went into production. The Seabed Arms Control Treaty prohibits the placement of nuclear weapons on the seabed beyond a 12-mile coast zone. Daisy-chained mine This comprises two moored, floating contact mines which are tethered together by a length of steel cable or chain. Typically, each mine is situated approximately away from its neighbor, and each floats a few meters below the surface of the ocean. When the target ship hits the steel cable, the mines on either side are drawn down the side of the ship's hull, exploding on contact. In this manner it is almost impossible for target ships to pass safely between two individually moored mines. 
Daisy-chained mines are a very simple concept which was used during World War II. The first prototype of the Daisy-chained mine and the first combat use came in Finland, 1939. Dummy mine Plastic drums filled with sand or concrete are periodically rolled off the side of ships as real mines are laid in large mine-fields. These inexpensive false targets (designed to be of a similar shape and size as genuine mines) are intended to slow down the process of mine clearance: a mine-hunter is forced to investigate each suspicious sonar contact on the sea bed, whether it is real or not. Often a maker of naval mines will provide both training and dummy versions of their mines. Mine laying Historically several methods were used to lay mines. During WWI and WWII, the Germans used U-boats to lay mines around the UK. In WWII, aircraft came into favour for mine laying with one of the largest examples being the mining of the Japanese sea routes in Operation Starvation. Laying a minefield is a relatively fast process with specialized ships, which is today the most common method. These minelayers can carry several thousand mines and manoeuvre with high precision. The mines are dropped at predefined intervals into the water behind the ship. Each mine is recorded for later clearing, but it is not unusual for these records to be lost together with the ships. Therefore, many countries demand that all mining operations be planned on land and records kept so that the mines can later be recovered more easily. Other methods to lay minefields include: Converted merchant ships – rolled or slid down ramps Aircraft – descent to the water is slowed by a parachute Submarines – launched from torpedo tubes or deployed from specialized mine racks on the sides of the submarine Combat boats – rolled off the side of the boat Camouflaged boats – masquerading as fishing boats Dropping from the shore – typically smaller, shallow-water mines Attack divers – smaller shallow-water mines In some cases, mines are automatically activated upon contact with the water. In others, a safety lanyard is pulled (one end attached to the rail of a ship, aircraft or torpedo tube) which starts an automatic timer countdown before the arming process is complete. Typically, the automatic safety-arming process takes some minutes to complete. This allows the people laying the mines sufficient time to move out of its activation and blast zones. Aerial mining in World War II Germany In the 1930s, Germany had experimented with the laying of mines by aircraft. It became a crucial element in their overall mining strategy. Aircraft had the advantage of speed, and they would never get caught in their own minefields. German mines held a large explosive charge. From April to June 1940, the Luftwaffe laid 1,000 mines in British waters. Soviet ports were mined, as was the Arctic convoy route to Murmansk. The Heinkel He 115 could carry two medium or one large mine while the Heinkel He 59, Dornier Do 18, Junkers Ju 88 and Heinkel He 111 could carry more. Soviet Union The USSR was relatively ineffective in its use of naval mines in WWII in comparison with its record in previous wars. Small mines were developed for use in rivers and lakes, and special mines for shallow water. A very large chemical mine was designed to sink through ice with the aid of a melting compound. Special aerial mine designs finally arrived in 1943–1944, the AMD-500 and AMD-1000. 
Various Soviet Naval Aviation torpedo bombers were pressed into the role of aerial mining in the Baltic Sea and the Black Sea, including Ilyushin DB-3s, Il-4s and Lend-Lease Douglas Boston IIIs. United Kingdom In September 1939, the UK announced the placement of extensive defensive minefields in waters surrounding the Home Islands. Offensive aerial mining operations began in April 1940 when 38 mines were laid at each of these locations: the Elbe River, the port of Lübeck and the German naval base at Kiel. In the next 20 months, mines delivered by aircraft sank or damaged 164 Axis ships with the loss of 94 aircraft. By comparison, direct aerial attacks on Axis shipping had sunk or damaged 105 vessels at a cost of 373 aircraft lost. The advantage of aerial mining became clear, and the UK prepared for it. A total of 48,000 aerial mines were laid by the Royal Air Force (RAF) in the European Theatre during World War II. United States As early as 1942, American mining experts such as Naval Ordnance Laboratory scientist Dr. Ellis A. Johnson, CDR USNR, suggested massive aerial mining operations against Japan's "outer zone" (Korea and northern China) as well as the "inner zone", their home islands. First, aerial mines would have to be developed further and manufactured in large numbers. Second, laying the mines would require a sizable air group. The US Army Air Forces had the carrying capacity but considered mining to be the navy's job. The US Navy lacked suitable aircraft. Johnson set about convincing General Curtis LeMay of the efficacy of heavy bombers laying aerial mines. B-24 Liberators, PBY Catalinas and other bomber aircraft took part in localized mining operations in the Southwest Pacific and the China Burma India (CBI) theaters, beginning with a successful attack on the Yangon River in February 1943. Aerial minelaying operations involved a coalition of British, Australian and American aircrews, with the RAF and the Royal Australian Air Force (RAAF) carrying out 60% of the sorties and the USAAF and US Navy covering 40%. Both British and American mines were used. Japanese merchant shipping suffered tremendous losses, while Japanese mine sweeping forces were spread too thin attending to far-flung ports and extensive coastlines. Admiral Thomas C. Kinkaid, who directed nearly all RAAF mining operations in CBI, heartily endorsed aerial mining, writing in July 1944 that "aerial mining operations were of the order of 100 times as destructive to the enemy as an equal number of bombing missions against land targets." A single B-24 dropped three mines into Haiphong harbour in October 1943. One of those mines sank a Japanese freighter. Another B-24 dropped three more mines into the harbour in November, and a second freighter was sunk by a mine. The threat of the remaining mines prevented a convoy of ten ships from entering Haiphong, and six of those ships were sunk by attacks before they reached a safe harbour. The Japanese closed Haiphong to all steel-hulled ships for the remainder of the war after another small ship was sunk by one of the remaining mines, although they may not have realized no more than three mines remained. Using Grumman TBF Avenger torpedo bombers, the US Navy mounted a direct aerial mining attack on enemy shipping in Palau on 30 March 1944 in concert with simultaneous conventional bombing and strafing attacks. The dropping of 78 mines deterred 32 Japanese ships from escaping Koror harbour, and 23 of those immobilized ships were sunk in a subsequent bombing raid. 
The combined operation sank or damaged 36 ships. Two Avengers were lost, and their crews were recovered. The mines brought port usage to a halt for 20 days. Japanese mine sweeping was unsuccessful; and the Japanese abandoned Palau as a base when their first ship attempting to traverse the swept channel was damaged by a mine detonation. In March 1945, Operation Starvation began in earnest, using 160 of LeMay's B-29 Superfortress bombers to attack Japan's inner zone. Almost half of the mines were the US-built Mark 25 model, carrying of explosives and weighing about . Other mines used included the smaller Mark 26. Fifteen B-29s were lost while 293 Japanese merchant ships were sunk or damaged. Twelve thousand aerial mines were laid, a significant barrier to Japan's access to outside resources. Prince Fumimaro Konoe said after the war that the aerial mining by B-29s had been "equally as effective as the B-29 attacks on Japanese industry at the closing stages of the war when all food supplies and critical material were prevented from reaching the Japanese home islands." The United States Strategic Bombing Survey (Pacific War) concluded that it would have been more efficient to combine the United States's effective anti-shipping submarine effort with land- and carrier-based air power to strike harder against merchant shipping and begin a more extensive aerial mining campaign earlier in the war. Survey analysts projected that this would have starved Japan, forcing an earlier end to the war. After the war, Dr. Johnson looked at the Japan inner zone shipping results, comparing the total economic cost of submarine-delivered mines versus air-dropped mines and found that, though 1 in 12 submarine mines connected with the enemy as opposed to 1 in 21 for aircraft mines, the aerial mining operation was about ten times less expensive per enemy ton sunk. Clearing WWII aerial mines Between 600,000 and 1,000,000 naval mines of all types were laid in WWII. Advancing military forces worked to clear mines from newly-taken areas, but extensive minefields remained in place after the war. Air-dropped mines had an additional problem for mine sweeping operations: they were not meticulously charted. In Japan, much of the B-29 mine-laying work had been performed at high altitude, with the drifting on the wind of mines carried by parachute adding a randomizing factor to their placement. Generalized danger areas were identified, with only the quantity of mines given in detail. Mines used in Operation Starvation were supposed to be self-sterilizing, but the circuit did not always work. Clearing the mines from Japanese waters took so many years that the task was eventually given to the Japan Maritime Self-Defense Force. For the purpose of clearing all types of naval mines, the Royal Navy employed German crews and minesweepers from June 1945 to January 1948, organised in the German Mine Sweeping Administration (GMSA), which consisted of 27,000 members of the former Kriegsmarine and 300 vessels. Mine clearing was not always successful: a number of ships were damaged or sunk by mines after the war. Two such examples were the liberty ships Pierre Gibault which was scrapped after hitting a mine in a previously cleared area off the Greek island of Kythira in June 1945, and Nathaniel Bacon which hit a minefield off Civitavecchia, Italy in December 1945, caught fire, was beached, and broke in two. 
Damage The damage that may be caused by a mine depends on the "shock factor value", a combination of the initial strength of the explosion and of the distance between the target and the detonation. When taken in reference to ship hull plating, the term "Hull Shock Factor" (HSF) is used, while keel damage is termed "Keel Shock Factor" (KSF). If the explosion is directly underneath the keel, then HSF is equal to KSF, but explosions that are not directly underneath the ship will have a lower value of KSF; a rough illustrative calculation is sketched below. Direct damage Usually only created by contact mines, direct damage is a hole blown in the ship. Among the crew, fragmentation wounds are the most common form of injury. Flooding typically occurs in one or two main watertight compartments, which can sink smaller ships or disable larger ones. Contact mine damage often occurs at or close to the waterline near the bow, but depending on circumstances a ship could be hit anywhere on its outer hull surface (one well-documented mine strike, for example, involved a contact mine detonating amidships, underneath the ship). Bubble jet effect The bubble jet effect occurs when a mine or torpedo detonates in the water a short distance away from the targeted ship. The explosion creates a bubble in the water, and due to the difference in pressure, the bubble will collapse from the bottom. The bubble is buoyant, and so it rises towards the surface. If the bubble reaches the surface as it collapses, it can create a pillar of water that can go over a hundred meters into the air (a "columnar plume"). If conditions are right and the bubble collapses onto the ship's hull, the damage to the ship can be extremely serious; the collapsing bubble forms a high-energy jet similar to a shaped charge that can break a metre-wide hole straight through the ship, flooding one or more compartments, and is capable of breaking smaller ships apart. The crew in the areas hit by the pillar are usually killed instantly. Other damage is usually limited. The Baengnyeong incident, in which the ROKS Cheonan broke in half and sank off the coast of South Korea in 2010, was caused by the bubble jet effect, according to an international investigation. Shock effect If the mine detonates at a distance from the ship, the change in water pressure causes the ship to resonate. This is frequently the deadliest type of explosion, if it is strong enough. The whole ship is dangerously shaken and everything on board is tossed around. Engines rip from their beds, cables from their holders, and so on. A badly shaken ship usually sinks quickly, with hundreds or even thousands of small leaks all over the ship and no way to power the pumps. The crew fare no better, as the violent shaking tosses them around. This shaking is powerful enough to cause disabling injury to knees and other joints in the body, particularly if the affected person is standing on surfaces connected directly to the hull (such as steel decks). The resulting gas cavitation and the shock-front differential over the width of the human body are sufficient to stun or kill divers. Countermeasures Weapons are frequently a few steps ahead of countermeasures, and mines are no exception. In this field the British, with their large seagoing navy, have had the bulk of world experience, and most anti-mine developments, such as degaussing and the double-L sweep, were British inventions. When on operational missions, such as the invasion of Iraq, the US still relies on British and Canadian minesweeping services.
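Returning to the shock factor defined at the start of the Damage section above, the sketch below shows one commonly cited formulation in which the shock factor grows with the square root of the charge mass and falls with stand-off distance, and the keel shock factor is reduced when the explosion is not directly beneath the keel. The exact constants and the angular weighting vary between sources, so this is an illustration only.

```python
# Illustrative sketch of one commonly cited hull/keel shock factor formulation.
# Constants and the angular weighting are assumptions; real formulations vary.
import math

def hull_shock_factor(charge_kg: float, standoff_m: float) -> float:
    """HSF ~ sqrt(charge mass) / stand-off distance."""
    return math.sqrt(charge_kg) / standoff_m

def keel_shock_factor(charge_kg: float, standoff_m: float, angle_deg: float) -> float:
    """KSF reduced as the explosion moves off the vertical beneath the keel.
    angle_deg is the angle between the sea surface and the charge-to-keel line:
    90 degrees means directly beneath the keel, so KSF equals HSF there."""
    weighting = (1.0 + math.sin(math.radians(angle_deg))) / 2.0
    return hull_shock_factor(charge_kg, standoff_m) * weighting

# Example: a 300 kg charge 15 m from the hull (values chosen for illustration).
print(round(hull_shock_factor(300, 15), 2))        # ~1.15
print(round(keel_shock_factor(300, 15, 90), 2))    # directly below: equals HSF
print(round(keel_shock_factor(300, 15, 45), 2))    # off to the side: lower
```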
The US has worked on some innovative mine-hunting countermeasures, such as the use of military dolphins to detect and flag mines. However, they are of questionable effectiveness. Mines in nearshore environments remain a particular challenge. They are small, and as technology has developed they can have anechoic coatings, be non-metallic, and be oddly shaped to resist detection. Further, oceanic conditions and the sea bottoms of the area of operations can degrade sweeping and hunting efforts. Mine countermeasures are far more expensive and time-consuming than mining operations, and that gap is only growing with new technologies. Passive countermeasures Ships can be designed to be difficult for mines to detect, to avoid detonating them. This is especially true for minesweepers and mine hunters that work in minefields, where a minimal signature outweighs the need for armour and speed. These ships have hulls of glass fibre or wood instead of steel to avoid magnetic signatures. They may use special propulsion systems, with low-magnetic electric motors to reduce the magnetic signature and Voith-Schneider propellers to limit the acoustic signature. They are built with hulls that produce a minimal pressure signature. These measures create other problems: they are expensive, slow, and vulnerable to enemy fire. Many modern ships have a mine-warning sonar—a simple sonar looking forward and warning the crew if it detects possible mines ahead. It is only effective when the ship is moving slowly.
Technology
Naval warfare
null
22131
https://en.wikipedia.org/wiki/Natural%20gas
Natural gas
Natural gas (also called fossil gas, methane gas, or simply gas) is a naturally occurring mixture of gaseous hydrocarbons consisting primarily of methane (95%) in addition to various smaller amounts of other higher alkanes. Traces of carbon dioxide, nitrogen, hydrogen sulfide, and helium are also usually present. Methane is colorless and odorless, and the second largest greenhouse gas contributor to global climate change after carbon dioxide. Because natural gas is odorless, odorizers such as mercaptan (which smells like rotten eggs) are commonly added to it for safety so that leaks can be readily detected. Natural gas is a fossil fuel that is formed when layers of organic matter (primarily marine microorganisms) decompose under anaerobic conditions and are subjected to intense heat and pressure underground over millions of years. The energy that the decayed organisms originally obtained from the sun via photosynthesis is stored as chemical energy within the molecules of methane and other hydrocarbons. Natural gas can be burned for heating, cooking, and electricity generation. Although natural gas is mostly burned as a fuel, it is also used as a chemical feedstock. The extraction and consumption of natural gas is a major industry. When burned for heat or electricity, natural gas emits fewer toxic air pollutants, less carbon dioxide, and almost no particulate matter compared to other fossil and biomass fuels. However, gas venting and unintended fugitive emissions throughout the supply chain can result in natural gas having a similar carbon footprint to other fossil fuels overall. Natural gas can be found in underground geological formations, often alongside other fossil fuels like coal and oil (petroleum). Most natural gas has been created through either biogenic or thermogenic processes. Thermogenic gas takes a much longer period of time to form and is created when organic matter is heated and compressed deep underground. Methanogenic organisms produce biogenic methane from a variety of sources, principally carbon dioxide. During petroleum production, natural gas is sometimes flared rather than being collected and used. Before natural gas can be burned as a fuel or used in manufacturing processes, it almost always has to be processed to remove impurities such as water. The byproducts of this processing include ethane, propane, butanes, pentanes, and higher molecular weight hydrocarbons. Hydrogen sulfide (which may be converted into pure sulfur), carbon dioxide, water vapor, and sometimes helium and nitrogen must also be removed. Natural gas is sometimes informally referred to simply as "gas", especially when it is being compared to other energy sources, such as oil, coal or renewables. However, it is not to be confused with gasoline, which is also shortened in colloquial usage to "gas", especially in North America. Natural gas is measured in standard cubic meters or standard cubic feet. Its density compared to air ranges from 0.58 (16.8 g/mol, 0.71 kg per standard cubic meter) to as high as 0.79 (22.9 g/mol, 0.97 kg per scm), but is generally less than 0.64 (18.5 g/mol, 0.78 kg per scm). For comparison, pure methane (16.0425 g/mol) has a density 0.5539 times that of air (0.678 kg per standard cubic meter); a short worked check of these figures is sketched below. Name In the early 1800s, natural gas became known as "natural" to distinguish it from the dominant gas fuel at the time, coal gas. Unlike coal gas, which is manufactured by heating coal, natural gas can be extracted from the ground in its native gaseous form.
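As a worked check of the density figures quoted above, the following sketch derives relative density and mass per standard cubic meter from molar mass alone, assuming ideal-gas behaviour, an air molar mass of about 28.96 g/mol, and standard conditions of 15 °C and 101.325 kPa; with those assumptions it reproduces the quoted values to within rounding.

```python
# Worked check of the quoted density figures (assumptions: air ~28.96 g/mol,
# ideal-gas behaviour, standard conditions of 15 degrees C and 101.325 kPa).
R = 8.314          # J/(mol*K), universal gas constant
T = 288.15         # K (15 degrees C)
P = 101_325.0      # Pa
M_AIR = 28.96      # g/mol, approximate molar mass of dry air

def relative_density(molar_mass_g: float) -> float:
    """Density relative to air, equal to the ratio of molar masses for ideal gases."""
    return molar_mass_g / M_AIR

def density_kg_per_scm(molar_mass_g: float) -> float:
    """Mass of one standard cubic meter: rho = M * P / (R * T)."""
    return (molar_mass_g / 1000.0) * P / (R * T)

for m in (16.0425, 16.8, 18.5, 22.9):   # methane and the quoted gas mixtures
    print(f"{m:7.4f} g/mol -> {relative_density(m):.4f} x air, "
          f"{density_kg_per_scm(m):.3f} kg/scm")
```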
When the use of natural gas overtook the use of coal gas in English speaking countries in the 20th century, it was increasingly referred to as simply "gas." In order to highlight its role in exacerbating the climate crisis, however, many organizations have criticized the continued use of the word "natural" in referring to the gas. These advocates prefer the term "fossil gas" or "methane gas" as better conveying to the public its climate threat. A 2020 study of Americans' perceptions of the fuel found that, across political identifications, the term "methane gas" led to better estimates of its harms and risks. History Natural gas can come out of the ground and cause a long-burning fire. In ancient Greece, the gas flames at Mount Chimaera contributed to the legend of the fire-breathing creature Chimera. In ancient China, gas resulting from the drilling for brines was first used by about 400 BC. The Chinese transported gas seeping from the ground in crude pipelines of bamboo to where it was used to boil salt water to extract the salt in the Ziliujing District of Sichuan. Natural gas was not widely used before the development of long distance pipelines in the early twentieth century. Before that, most use was near to the source of the well, and the predominant gas for fuel and lighting during the industrial revolution was manufactured coal gas. The history of natural gas in the United States begins with localized use. In the seventeenth century, French missionaries witnessed the American Indians setting fire to natural gas seeps around Lake Erie, and scattered observations of these seeps were made by European-descended settlers throughout the eastern seaboard through the 1700s. In 1821, William Hart dug the first commercial natural gas well in the United States at Fredonia, New York, United States, which led in 1858 to the formation of the Fredonia Gas Light Company. Further such ventures followed near wells in other states, until technological innovations allowed the growth of major long distance pipelines from the 1920s onward. By 2009, (or 8%) had been used out of the total of estimated remaining recoverable reserves of natural gas. Sources Natural gas In the 19th century, natural gas was primarily obtained as a by-product of producing oil. The small, light gas carbon chains came out of solution as the extracted fluids underwent pressure reduction from the reservoir to the surface, similar to uncapping a soft drink bottle where the carbon dioxide effervesces. The gas was often viewed as a by-product, a hazard, and a disposal problem in active oil fields. The large volumes produced could not be used until relatively expensive pipeline and storage facilities were constructed to deliver the gas to consumer markets. Until the early part of the 20th century, most natural gas associated with oil was either simply released or burned off at oil fields. Gas venting and production flaring are still practised in modern times, but efforts are ongoing around the world to retire them, and to replace them with other commercially viable and useful alternatives. In addition to transporting gas via pipelines for use in power generation, other end uses for natural gas include export as liquefied natural gas (LNG) or conversion of natural gas into other liquid products via gas to liquids (GTL) technologies. GTL technologies can convert natural gas into liquids products such as gasoline, diesel or jet fuel. 
A variety of GTL technologies have been developed, including Fischer–Tropsch (F–T), methanol to gasoline (MTG) and syngas to gasoline plus (STG+). F–T produces a synthetic crude that can be further refined into finished products, while MTG can produce synthetic gasoline from natural gas. STG+ can produce drop-in gasoline, diesel, jet fuel and aromatic chemicals directly from natural gas via a single-loop process. In 2011, Royal Dutch Shell's per day F–T plant went into operation in Qatar. Natural gas can be "associated" (found in oil fields), or "non-associated" (isolated in natural gas fields), and is also found in coal beds (as coalbed methane). It sometimes contains a significant amount of ethane, propane, butane, and pentane—heavier hydrocarbons removed for commercial use prior to the methane being sold as a consumer fuel or chemical plant feedstock. Non-hydrocarbons such as carbon dioxide, nitrogen, helium (rarely), and hydrogen sulfide must also be removed before the natural gas can be transported. Natural gas extracted from oil wells is called casinghead gas (whether or not truly produced up the annulus and through a casinghead outlet) or associated gas. The natural gas industry is extracting an increasing quantity of gas from challenging, unconventional resource types: sour gas, tight gas, shale gas, and coalbed methane. There is some disagreement on which country has the largest proven gas reserves. Sources that consider that Russia has by far the largest proven reserves include the US Central Intelligence Agency (47,600 km3) and Energy Information Administration (47,800 km3), as well as the Organization of Petroleum Exporting Countries (48,700 km3). Contrarily, BP credits Russia with only 32,900 km3, which would place it in second, slightly behind Iran (33,100 to 33,800 km3, depending on the source). It is estimated that there are about 900,000 km3 of "unconventional" gas such as shale gas, of which 180,000 km3 may be recoverable. In turn, many studies from MIT, Black & Veatch and the US Department of Energy predict that natural gas will account for a larger portion of electricity generation and heat in the future. The world's largest gas field is the offshore South Pars / North Dome Gas-Condensate field, shared between Iran and Qatar. It is estimated to have of natural gas and of natural gas condensates. Because natural gas is not a pure product, as the reservoir pressure drops when non-associated gas is extracted from a field under supercritical (pressure/temperature) conditions, the higher molecular weight components may partially condense upon isothermic depressurizing—an effect called retrograde condensation. The liquid thus formed may get trapped as the pores of the gas reservoir get depleted. One method to deal with this problem is to re-inject dried gas free of condensate to maintain the underground pressure and to allow re-evaporation and extraction of condensates. More frequently, the liquid condenses at the surface, and one of the tasks of the gas plant is to collect this condensate. The resulting liquid is called natural gas liquid (NGL) and has commercial value. Shale gas Shale gas is natural gas produced from shale. Because shale's matrix permeability is too low to allow gas to flow in economical quantities, shale gas wells depend on fractures to allow the gas to flow. Early shale gas wells depended on natural fractures through which gas flowed; almost all shale gas wells today require fractures artificially created by hydraulic fracturing. 
Since 2000, shale gas has become a major source of natural gas in the United States and Canada. Because of increased shale gas production the United States was in 2014 the number one natural gas producer in the world. The production of shale gas in the United States has been described as a "shale gas revolution" and as "one of the landmark events in the 21st century." Following the increased production in the United States, shale gas exploration is beginning in countries such as Poland, China, and South Africa. Chinese geologists have identified the Sichuan Basin as a promising target for shale gas drilling, because of the similarity of shales to those that have proven productive in the United States. Production from the Wei-201 well is between 10,000 and 20,000 m3 per day. In late 2020, China National Petroleum Corporation claimed daily production of 20 million cubic meters of gas from its Changning-Weiyuan demonstration zone. Town gas Town gas is a flammable gaseous fuel made by the destructive distillation of coal. It contains a variety of calorific gases including hydrogen, carbon monoxide, methane, and other volatile hydrocarbons, together with small quantities of non-calorific gases such as carbon dioxide and nitrogen, and was used in a similar way to natural gas. This is a historical technology and is not usually economically competitive with other sources of fuel gas today. Most town "gashouses" located in the eastern US in the late 19th and early 20th centuries were simple by-product coke ovens that heated bituminous coal in air-tight chambers. The gas driven off from the coal was collected and distributed through networks of pipes to residences and other buildings where it was used for cooking and lighting. (Gas heating did not come into widespread use until the last half of the 20th century.) The coal tar (or asphalt) that collected in the bottoms of the gashouse ovens was often used for roofing and other waterproofing purposes, and when mixed with sand and gravel was used for paving streets. Crystallized natural gas – clathrates Huge quantities of natural gas (primarily methane) exist in the form of clathrates under sediment on offshore continental shelves and on land in arctic regions that experience permafrost, such as those in Siberia. Hydrates require a combination of high pressure and low temperature to form. In 2013, Japan Oil, Gas and Metals National Corporation (JOGMEC) announced that they had recovered commercially relevant quantities of natural gas from methane hydrate. Processing The image below is a schematic block flow diagram of a typical natural gas processing plant. It shows the various unit processes used to convert raw natural gas into sales gas pipelined to the end user markets. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL) propane, butanes and natural gasoline (denoted as pentanes +). Demand As of mid-2020, natural gas production in the US had peaked three times, with current levels exceeding both previous peaks. It reached 24.1 trillion cubic feet per year in 1973, followed by a decline, and reached 24.5 trillion cubic feet in 2001. After a brief drop, withdrawals increased nearly every year since 2006 (owing to the shale gas boom), with 2017 production at 33.4 trillion cubic feet and 2019 production at 40.7 trillion cubic feet. 
After the third peak in December 2019, extraction continued to fall from March onward due to decreased demand caused by the COVID-19 pandemic in the US. The 2021 global energy crisis was driven by a global surge in demand as the world quit the economic recession caused by COVID-19, particularly due to strong energy demand in Asia. Storage and transport Because of its low density, it is not easy to store natural gas or to transport it by vehicle. Natural gas pipelines are impractical across oceans, since the gas needs to be cooled down and compressed, as the friction in the pipeline causes the gas to heat up. Many existing pipelines in the US are close to reaching their capacity, prompting some politicians representing northern states to speak of potential shortages. The large trade cost implies that natural gas markets are globally much less integrated, causing significant price differences across countries. In Western Europe, the gas pipeline network is already dense. New pipelines are planned or under construction between Western Europe and the Near East or Northern Africa. Whenever gas is bought or sold at custody transfer points, rules and agreements are made regarding the gas quality. These may include the maximum allowable concentration of , and . Usually sales quality gas that has been treated to remove contamination is traded on a "dry gas" basis and is required to be commercially free from objectionable odours, materials, and dust or other solid or liquid matter, waxes, gums and gum forming constituents, which might damage or adversely affect operation of equipment downstream of the custody transfer point. Based on their geographic origin, H-gas (high-calorific gas) and L-gas (low-calorific gas) are to be distinguished. Both types require separate transport, leading to two separate pipeline networks, e.g. in parts of Germany (with a strengthened focus and transition towards H-gas, as the L-gas reservoirs in Germany and the Netherlands are declining). LNG carrier ships transport liquefied natural gas (LNG) across oceans, while tank trucks can carry LNG or compressed natural gas (CNG) over shorter distances. Sea transport using CNG carrier ships that are now under development may be competitive with LNG transport in specific conditions. Gas is turned into liquid at a liquefaction plant, and is returned to gas form at regasification plant at the terminal. Shipborne regasification equipment is also used. LNG is the preferred form for long distance, high volume transportation of natural gas, whereas pipeline is preferred for transport for distances up to over land and approximately half that distance offshore. CNG is transported at high pressure, typically above . Compressors and decompression equipment are less capital intensive and may be economical in smaller unit sizes than liquefaction/regasification plants. Natural gas trucks and carriers may transport natural gas directly to end-users, or to distribution points such as pipelines. In the past, the natural gas which was recovered in the course of recovering petroleum could not be profitably sold, and was simply burned at the oil field in a process known as flaring. Flaring is now illegal in many countries. Additionally, higher demand in the last 20–30 years has made production of gas associated with oil economically viable. As a further option, the gas is now sometimes re-injected into the formation for enhanced oil recovery by pressure maintenance as well as miscible or immiscible flooding. 
Conservation, re-injection, or flaring of natural gas associated with oil is primarily dependent on proximity to markets (pipelines) and regulatory restrictions. Natural gas can also be exported indirectly, embodied in other energy-intensive physical outputs. The expansion of shale gas production in the US has caused prices to drop relative to other countries. This has caused a boom in energy-intensive manufacturing sector exports, whereby the average dollar unit of US manufacturing exports almost tripled its energy content between 1996 and 2012. A "master gas system" was invented in Saudi Arabia in the late 1970s, ending any necessity for flaring. Satellite and nearby infra-red camera observations, however, show that flaring and venting are still happening in some countries. Natural gas is used to generate electricity and heat for desalination. Similarly, some landfills that also discharge methane gases have been set up to capture the methane and generate electricity. Natural gas is often stored underground inside depleted gas reservoirs from previous gas wells, in salt domes, or in tanks as liquefied natural gas. The gas is injected in a time of low demand and extracted when demand picks up. Storage near end users helps to meet volatile demands, but such storage may not always be practicable. With 15 countries accounting for 84% of the worldwide extraction, access to natural gas has become an important issue in international politics, and countries vie for control of pipelines. In the first decade of the 21st century, Gazprom, the state-owned energy company in Russia, engaged in disputes with Ukraine and Belarus over the price of natural gas, which have created concerns that gas deliveries to parts of Europe could be cut off for political reasons. The United States is preparing to export natural gas. Floating liquefied natural gas Floating liquefied natural gas (FLNG) is an innovative technology designed to enable the development of offshore gas resources that would otherwise remain untapped because environmental or economic factors currently make them impractical to develop via a land-based LNG operation. FLNG technology also provides a number of environmental and economic advantages: Environmental – Because all processing is done at the gas field, there is no requirement for long pipelines to shore, compression units to pump the gas to shore, dredging and jetty construction, or onshore construction of an LNG processing plant, which significantly reduces the environmental footprint. Avoiding construction also helps preserve marine and coastal environments. In addition, environmental disturbance will be minimised during decommissioning because the facility can easily be disconnected and removed before being refurbished and re-deployed elsewhere. Economic – Where pumping gas to shore can be prohibitively expensive, FLNG makes development economically viable. As a result, it will open up new business opportunities for countries to develop offshore gas fields that would otherwise remain stranded, such as those offshore East Africa. Many gas and oil companies are considering the economic and environmental benefits of floating liquefied natural gas (FLNG). There are currently projects underway to construct five FLNG facilities. Petronas is close to completion on its FLNG-1 at Daewoo Shipbuilding and Marine Engineering and is underway on its FLNG-2 project at Samsung Heavy Industries. Shell Prelude is due to start production in 2017.
The Browse LNG project will commence FEED in 2019. Uses Natural gas is primarily used in the northern hemisphere. North America and Europe are major consumers. Often well head gases require removal of various hydrocarbon molecules contained within the gas. Some of these gases include heptane, pentane, propane and other hydrocarbons with molecular weights above methane (). The natural gas transmission lines extend to the natural gas processing plant or unit which removes the higher-molecular weight hydrocarbons to produce natural gas with energy content between . The processed natural gas may then be used for residential, commercial and industrial uses. Mid-stream natural gas Natural gas flowing in the distribution lines is called mid-stream natural gas and is often used to power engines which rotate compressors. These compressors are required in the transmission line to pressurize and repressurize the mid-stream natural gas as the gas travels. Typically, natural gas powered engines require natural gas to operate at the rotational name plate specifications. Several methods are used to remove these higher molecular weighted gases for use by the natural gas engine. A few technologies are as follows: Joule–Thomson skid Cryogenic or chiller system Chemical enzymology system Power generation Domestic use In the US, over one-third of households (>40 million homes) cook with gas. Natural gas dispensed in a residential setting can generate temperatures in excess of making it a powerful domestic cooking and heating fuel. Stanford scientists estimated that gas stoves emit 0.8–1.3% of the gas they use as unburned methane and that total U.S. stove emissions are 28.1 gigagrams of methane. In much of the developed world it is supplied through pipes to homes, where it is used for many purposes including ranges and ovens, heating/cooling, outdoor and portable grills, and central heating. Heaters in homes and other buildings may include boilers, furnaces, and water heaters. Both North America and Europe are major consumers of natural gas. Domestic appliances, furnaces, and boilers use low pressure, usually with a standard pressure around over atmospheric pressure. The pressures in the supply lines vary, either the standard utilization pressure (UP) mentioned above or elevated pressure (EP), which may be anywhere from over atmospheric pressure. Systems using EP have a regulator at the service entrance to step down to UP. Natural gas piping systems inside buildings are often designed with pressures of , and have downstream pressure regulators to reduce pressure as needed. In the United States the maximum allowable operating pressure for natural gas piping systems within a building is based on NFPA 54: National Fuel Gas Code, except when approved by the Public Safety Authority or when insurance companies have more stringent requirements. Generally, natural gas system pressures are not allowed to exceed unless all of the following conditions are met: The AHJ will allow a higher pressure. The distribution pipe is welded. (Note: 2. Some jurisdictions may also require that welded joints be radiographed to verify continuity). The pipes are closed for protection and placed in a ventilated area that does not allow gas accumulation. The pipe is installed in the areas used for industrial processes, research, storage or mechanical equipment rooms. Generally, a maximum liquefied petroleum gas pressure of is allowed, provided the building is constructed in accordance with NFPA 58: Liquefied Petroleum Gas Code, Chapter 7. 
A seismic (earthquake) valve operating at a pressure of 55 psig (3.7 bar) can stop the flow of natural gas into the site-wide natural gas distribution piping network, which may run outdoors underground, above building roofs, or within the upper supports of a canopy roof. Seismic earthquake valves are designed for use at a maximum of 60 psig. In Australia, natural gas is transported from gas processing facilities to regulator stations via transmission pipelines. Gas is then regulated down to distributed pressures and the gas is distributed around a gas network via gas mains. Small branches from the network, called services, connect individual domestic dwellings, or multi-dwelling buildings, to the network. The networks typically range in pressures from 7 kPa (low pressure) to 515 kPa (high pressure). Gas is then regulated down to 1.1 kPa or 2.75 kPa, before being metered and passed to the consumer for domestic use. Natural gas mains are made from a variety of materials: historically cast iron, though more modern mains are made from steel or polyethylene. In some states in the USA, natural gas can be supplied by independent natural gas wholesalers/suppliers using existing pipeline owners' infrastructure through Natural Gas Choice programs. LPG (liquefied petroleum gas) typically fuels outdoor and portable grills. However, compressed natural gas (CNG) is sparsely available for similar applications in rural areas of the US that are underserved by the existing pipeline system and rely instead on the less expensive and more abundant LPG (liquefied petroleum gas). Transportation CNG is a cleaner and also cheaper alternative to other automobile fuels such as gasoline (petrol). By the end of 2014, there were over 20 million natural gas vehicles worldwide, led by Iran (3.5 million), China (3.3 million), Pakistan (2.8 million), Argentina (2.5 million), India (1.8 million), and Brazil (1.8 million). The energy efficiency is generally equal to that of gasoline engines, but lower compared with modern diesel engines. Gasoline/petrol vehicles converted to run on natural gas suffer because of the low compression ratio of their engines, resulting in a reduction of delivered power while running on natural gas (10–15%). CNG-specific engines, however, use a higher compression ratio due to this fuel's higher octane number of 120–130. Besides use in road vehicles, CNG can also be used in aircraft. Compressed natural gas has been used in some aircraft, such as the Aviat Aircraft Husky 200 CNG and the Chromarat VX-1 KittyHawk. LNG is also being used in aircraft. Russian aircraft manufacturer Tupolev, for instance, is running a development program to produce LNG- and hydrogen-powered aircraft. The program has been running since the mid-1970s, and seeks to develop LNG and hydrogen variants of the Tu-204 and Tu-334 passenger aircraft, and also the Tu-330 cargo aircraft. Depending on the current market price for jet fuel and LNG, fuel for an LNG-powered aircraft could cost 5,000 rubles (US$100) less per tonne, roughly 60% less, with considerable reductions in carbon monoxide, hydrocarbon and nitrogen oxide emissions. The advantages of liquid methane as a jet engine fuel are that it has more specific energy than the standard kerosene mixes do and that its low temperature can help cool the air which the engine compresses for greater volumetric efficiency, in effect replacing an intercooler. Alternatively, it can be used to lower the temperature of the exhaust.
Fertilizers Natural gas is a major feedstock for the production of ammonia, via the Haber process, for use in fertilizer production. The development of synthetic nitrogen fertilizer has significantly supported global population growth; it has been estimated that almost half the people on Earth are currently fed as a result of synthetic nitrogen fertilizer use. Hydrogen Natural gas can be used to produce hydrogen, with one common method being the hydrogen reformer. Hydrogen has many applications: it is a primary feedstock for the chemical industry, a hydrogenating agent, an important commodity for oil refineries, and the fuel source in hydrogen vehicles. Animal and fish feed Protein-rich animal and fish feed is produced on a commercial scale by feeding natural gas to Methylococcus capsulatus bacteria. Olefins (alkenes) Natural gas components (alkanes) can be converted into olefins (alkenes) and used in other chemical syntheses. Ethane can be converted by oxidative dehydrogenation to ethylene, which can be further converted to ethylene oxide, ethylene glycol, acetaldehyde or other olefins. Propane can be converted by oxidative dehydrogenation to propylene, or oxidized to acrylic acid and acrylonitrile. Other Natural gas is also used in the manufacture of fabrics, glass, steel, plastics, paint, synthetic oil, and other products. Fuel for industrial heating and desiccation processes. Raw material for large-scale fuel production using the gas-to-liquid (GTL) process (e.g. to produce sulphur- and aromatic-free diesel with low-emission combustion). Health effects Cooking with natural gas contributes to poor indoor air quality and can lead to severe respiratory diseases such as asthma. Environmental effects Greenhouse effect and natural gas release Natural gas is a growing contributor to climate change. Both the natural gas itself (specifically methane) and the carbon dioxide released when natural gas is burned are greenhouse gases. Human activity is responsible for about 60% of all methane emissions and for most of the resulting increase in atmospheric methane. Natural gas is intentionally released or is otherwise known to leak during the extraction, storage, transportation, and distribution of fossil fuels. Globally, methane accounts for an estimated 33% of anthropogenic greenhouse gas warming. The decomposition of municipal solid waste (a source of landfill gas) and wastewater accounts for an additional 18% of such emissions. These estimates include substantial uncertainties, which should be reduced in the near future with improved satellite measurements, such as those planned for MethaneSAT. After release to the atmosphere, methane is removed by gradual oxidation to carbon dioxide and water by hydroxyl radicals (OH·) formed in the troposphere or stratosphere, giving the overall chemical reaction CH4 + 2 O2 → CO2 + 2 H2O. While the lifetime of atmospheric methane is relatively short when compared to carbon dioxide, with a half-life of about 7 years, it is more efficient at trapping heat in the atmosphere, so that a given quantity of methane has 84 times the global-warming potential of carbon dioxide over a 20-year period and 28 times over a 100-year period. Natural gas is thus a potent greenhouse gas due to the strong radiative forcing of methane in the short term, and the continuing effects of carbon dioxide in the longer term. Targeted efforts to reduce warming quickly by reducing anthropogenic methane emissions are a climate change mitigation strategy supported by the Global Methane Initiative.
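The global-warming-potential figures above lend themselves to a quick conversion. The following is a minimal sketch that turns a quantity of leaked methane into CO2-equivalent tonnes over the two horizons quoted (84 times over 20 years, 28 times over 100 years); the leak size is purely illustrative:

```python
GWP = {20: 84, 100: 28}   # global-warming potential of methane vs CO2, by horizon in years

def co2_equivalent(tonnes_ch4, horizon_years=100):
    """CO2-equivalent mass (tonnes) of a methane release over a 20- or 100-year horizon."""
    return tonnes_ch4 * GWP[horizon_years]

leak = 1_000   # tonnes of methane, an assumed illustrative figure
print(co2_equivalent(leak, 20))    # 84000 t CO2e over 20 years
print(co2_equivalent(leak, 100))   # 28000 t CO2e over 100 years
```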
Greenhouse gas emissions When refined and burned, natural gas can produce 25–30% less carbon dioxide per joule delivered than oil, and 40–45% less than coal. It can also produce fewer toxic pollutants than other hydrocarbon fuels. However, compared to other major fossil fuels, natural gas causes more emissions in relative terms during the production and transportation of the fuel, meaning that the life cycle greenhouse gas emissions are about 50% higher than the direct emissions from the site of consumption. In terms of the warming effect over 100 years, natural gas production and use comprise about one fifth of human greenhouse gas emissions, and this contribution is growing rapidly. Globally, natural gas use emitted about 7.8 billion tons of carbon dioxide in 2020 (including flaring), while coal and oil use emitted 14.4 and 12 billion tons, respectively. The IEA estimates the energy sector (oil, natural gas, coal and bioenergy) to be responsible for about 40% of human methane emissions. According to the IPCC Sixth Assessment Report, natural gas consumption grew by 15% between 2015 and 2019, compared to a 5% increase in oil and oil product consumption. The continued financing and construction of new gas pipelines indicate that huge emissions of fossil greenhouse gases could be locked in for 40 to 50 years into the future. In the U.S. state of Texas alone, five new long-distance gas pipelines have been under construction, with the first entering service in 2019 and the others scheduled to come online during 2020–2022. Installation bans To reduce its greenhouse emissions, the Netherlands is subsidizing a transition away from natural gas for all homes in the country by 2050. In Amsterdam, no new residential gas accounts have been allowed since 2018, and all homes in the city are expected to be converted by 2040 to use the excess heat from adjacent industrial buildings and operations. Some cities in the United States have started prohibiting gas hookups for new houses, with state laws passed and under consideration to either require electrification or prohibit local requirements. New gas appliance hookups are banned in New York State and the Australian Capital Territory. Additionally, the state of Victoria in Australia has implemented a ban on new natural gas hookups starting from January 1, 2024, as part of its gas substitution roadmap. This followed campaigning which resulted in a prohibition on onshore gas exploration and production in Victoria in 2014. This was partially lifted in 2021, but a constitutional ban remains on fracking. The UK government is also experimenting with alternative home heating technologies to meet its climate goals. To preserve their businesses, natural gas utilities in the United States have been lobbying for laws preventing local electrification ordinances, and are promoting renewable natural gas and hydrogen fuel. Other pollutants Although natural gas produces far lower amounts of sulfur dioxide and nitrogen oxides (NOx) than other fossil fuels, the nitrogen dioxide released by burning natural gas in homes can still be a health hazard. Radionuclides Natural gas extraction also produces radioactive isotopes of polonium (Po-210), lead (Pb-210) and radon (Rn-222). Radon is a gas with initial activity from 5 to 200,000 becquerels per cubic meter of gas. It decays rapidly to Pb-210, which can build up as a thin film in gas extraction equipment. Safety concerns The natural gas extraction workforce faces unique health and safety challenges.
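For a rough sense of scale, the 2020 combustion figures quoted above can be turned into shares of fossil-fuel CO2. This is a back-of-envelope sketch only; it ignores methane leakage and all non-fossil emission sources:

```python
# 2020 combustion CO2 figures quoted above, in billions of tonnes
emissions_gt_co2 = {"natural gas": 7.8, "coal": 14.4, "oil": 12.0}

total = sum(emissions_gt_co2.values())
for fuel, gt in emissions_gt_co2.items():
    print(f"{fuel}: {gt / total:.0%} of the three fuels' combined CO2")
# natural gas works out to roughly 23% of the combined combustion CO2
```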
Production Some gas fields yield sour gas containing hydrogen sulfide (), a toxic compound when inhaled. Amine gas treating, an industrial scale process which removes acidic gaseous components, is often used to remove hydrogen sulfide from natural gas. Extraction of natural gas (or oil) leads to decrease in pressure in the reservoir. Such decrease in pressure in turn may result in subsidence — sinking of the ground above. Subsidence may affect ecosystems, waterways, sewer and water supply systems, foundations, and so on. Fracking Releasing natural gas from subsurface porous rock formations may be accomplished by a process called hydraulic fracturing or "fracking". Since the first commercial hydraulic fracturing operation in 1949, approximately one million wells have been hydraulically fractured in the United States. The production of natural gas from hydraulically fractured wells has used the technological developments of directional and horizontal drilling, which improved access to natural gas in tight rock formations. Strong growth in the production of unconventional gas from hydraulically fractured wells occurred between 2000 and 2012. In hydraulic fracturing, well operators force water mixed with a variety of chemicals through the wellbore casing into the rock. The high pressure water breaks up or "fracks" the rock, which releases gas from the rock formation. Sand and other particles are added to the water as a proppant to keep the fractures in the rock open, thus enabling the gas to flow into the casing and then to the surface. Chemicals are added to the fluid to perform such functions as reducing friction and inhibiting corrosion. After the "frack", oil or gas is extracted and 30–70% of the frack fluid, i.e. the mixture of water, chemicals, sand, etc., flows back to the surface. Many gas-bearing formations also contain water, which will flow up the wellbore to the surface along with the gas, in both hydraulically fractured and non-hydraulically fractured wells. This produced water often has a high content of salt and other dissolved minerals that occur in the formation. The volume of water used to hydraulically fracture wells varies according to the hydraulic fracturing technique. In the United States, the average volume of water used per hydraulic fracture has been reported as nearly 7,375 gallons for vertical oil and gas wells prior to 1953, nearly 197,000 gallons for vertical oil and gas wells between 2000 and 2010, and nearly 3 million gallons for horizontal gas wells between 2000 and 2010. Determining which fracking technique is appropriate for well productivity depends largely on the properties of the reservoir rock from which to extract oil or gas. If the rock is characterized by low-permeability – which refers to its ability to let substances, i.e. gas, pass through it, then the rock may be considered a source of tight gas. Fracking for shale gas, which is currently also known as a source of unconventional gas, involves drilling a borehole vertically until it reaches a lateral shale rock formation, at which point the drill turns to follow the rock for hundreds or thousands of feet horizontally. In contrast, conventional oil and gas sources are characterized by higher rock permeability, which naturally enables the flow of oil or gas into the wellbore with less intensive hydraulic fracturing techniques than the production of tight gas has required. 
The decades in development of drilling technology for conventional and unconventional oil and gas production have not only improved access to natural gas in low-permeability reservoir rocks, but also posed significant adverse impacts on environmental and public health. The US EPA has acknowledged that toxic, carcinogenic chemicals, i.e. benzene and ethylbenzene, have been used as gelling agents in water and chemical mixtures for high volume horizontal fracturing (HVHF). Following the hydraulic fracture in HVHF, the water, chemicals, and frack fluid that return to the well's surface, called flowback or produced water, may contain radioactive materials, heavy metals, natural salts, and hydrocarbons which exist naturally in shale rock formations. Fracking chemicals, radioactive materials, heavy metals, and salts that are removed from the HVHF well by well operators are so difficult to remove from the water they are mixed with, and would so heavily pollute the water cycle, that most of the flowback is either recycled into other fracking operations or injected into deep underground wells, eliminating the water that HVHF required from the hydrologic cycle. Historically low gas prices have delayed the nuclear renaissance, as well as the development of solar thermal energy. Added odor In its native state, natural gas is colorless and almost odorless. In the US, the New London School explosion that occurred in 1937 in Texas caused a push for legislation requiring the addition of an odorant to assist consumers in detecting leaks. An odorizer with an unpleasant smell, such as thiophane or tert-Butylthiol (t-butyl mercaptan) may be added. Situations have occurred in which an odorant cannot be properly detected by an observer with a normal sense of smell despite being detectable by analytical instruments. This is caused by odor masking, when one odor overpowers the sensation of another. As of 2011, the industry is conducting research on the causes of odor masking. Risk of explosion Explosions caused by natural gas leaks occur a few times each year. Individual homes, small businesses and other structures are most frequently affected when an internal leak builds up gas inside the structure. Leaks often result from excavation work, such as when contractors dig and strike pipelines, sometimes without knowing any damage resulted. Frequently, the blast is powerful enough to significantly damage a building but leave it standing. In these cases, the people inside tend to have minor to moderate injuries. Occasionally, the gas can collect in high enough quantities to cause a deadly explosion, destroying one or more buildings in the process. Many building codes now forbid the installation of gas pipes inside cavity walls or below floor boards to mitigate against this risk. Gas usually dissipates readily outdoors, but can sometimes collect in dangerous quantities if flow rates are high enough. However, considering the tens of millions of structures that use the fuel, the individual risk from using natural gas is low. Risk of carbon monoxide inhalation Natural gas heating systems may cause carbon monoxide poisoning if unvented or poorly vented. Improvements in natural gas furnace designs have greatly reduced CO poisoning concerns. Detectors are also available that warn of carbon monoxide or explosive gases such as methane and propane. 
Energy content, statistics, and pricing Quantities of natural gas are measured in standard cubic meters (cubic meter of gas at temperature and pressure ) or standard cubic feet (cubic foot of gas at temperature 60.0 °F and pressure ), 1 standard cubic meter = 35.301 standard cubic feet. The gross heat of combustion of commercial quality natural gas is around , but this can vary by several percent. This is about 50 to 54 MJ/kg depending on the density. For comparison, the heat of combustion of pure methane is 37.7 MJ per standard cubic metre, or 55.5 MJ/kg. Except in the European Union, the U.S., and Canada, natural gas is sold in gigajoule retail units. LNG (liquefied natural gas) and LPG (liquefied petroleum gas) are traded in metric tonnes (1,000 kg) or million BTU as spot deliveries. Long term natural gas distribution contracts are signed in cubic meters, and LNG contracts are in metric tonnes. The LNG and LPG is transported by specialized transport ships, as the gas is liquified at cryogenic temperatures. The specification of each LNG/LPG cargo will usually contain the energy content, but this information is in general not available to the public. The European Union aimed to cut its gas dependency on Russia by two-thirds in 2022. In August 2015, possibly the largest natural gas discovery in history was made and notified by an Italian gas company ENI. The energy company indicated that it has unearthed a "supergiant" gas field in the Mediterranean Sea covering about . This was named the Zohr gas field and could hold a potential of natural gas. ENI said that the energy is about . The Zohr field was found in the deep waters off the northern coast of Egypt and ENI claims that it will be the largest ever in the Mediterranean and even the world. European Union Gas prices for end users vary greatly across the EU. A single European energy market, one of the key objectives of the EU, should level the prices of gas in all EU member states. Moreover, it would help to resolve supply and global warming issues, as well as strengthen relations with other Mediterranean countries and foster investments in the region. Qatar has been asked by the US to supply emergency gas to the EU in case of supply disruptions in the Russo-Ukrainian crisis. United States In US units, of natural gas produces around . The actual heating value when the water formed does not condense is the net heat of combustion and can be as much as 10% less. In the United States, retail sales are often in units of therms (th); 1 therm = 100,000 BTU. Gas sales to domestic consumers are often in units of 100 standard cubic feet (scf). Gas meters measure the volume of gas used, and this is converted to therms by multiplying the volume by the energy content of the gas used during that period, which varies slightly over time. The typical annual consumption of a single family residence is 1,000 therms or one Residential Customer Equivalent (RCE). Wholesale transactions are generally done in decatherms (Dth), thousand decatherms (MDth), or million decatherms (MMDth). A million decatherms is a trillion BTU, roughly a billion cubic feet of natural gas. The price of natural gas varies greatly depending on location and type of consumer. The typical caloric value of natural gas is roughly 1,000 BTU per cubic foot, depending on gas composition. Natural gas in the United States is traded as a futures contract on the New York Mercantile Exchange. Each contract is for 10,000 million BTU or . 
Thus, if the price of gas is $10/million BTU on the NYMEX, the contract is worth $100,000. Canada Canada uses metric measure for internal trade of petrochemical products. Consequently, natural gas is sold by the gigajoule (GJ), cubic meter (m3) or thousand cubic meters (E3m3). Distribution infrastructure and meters almost always meter volume (cubic foot or cubic meter). Some jurisdictions, such as Saskatchewan, sell gas by volume only. Other jurisdictions, such as Alberta, sell gas by energy content (GJ). In these areas, almost all meters for residential and small commercial customers measure volume (m3 or ft3), and billing statements include a multiplier to convert the volume to the energy content of the local gas supply. A gigajoule (GJ) is a measure approximately equal to of oil, or about 1 million BTU of gas. The energy content of the gas supplied in Canada can vary from depending on gas supply and processing between the wellhead and the customer. Adsorbed natural gas (ANG) Natural gas may be stored by adsorbing it onto porous solids called sorbents. The optimal condition for methane storage is at room temperature and atmospheric pressure. Pressures up to 4 MPa (about 40 times atmospheric pressure) will yield greater storage capacity. The most common sorbent used for ANG is activated carbon (AC), primarily in three forms: activated carbon fiber (ACF), powdered activated carbon (PAC), and activated carbon monolith.
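The retail and wholesale units described in the preceding sections can be tied together in a few lines. The following is a minimal sketch assuming the typical heat content of roughly 1,000 BTU per cubic foot quoted earlier; actual billing multipliers vary with gas composition, and the household figure is illustrative:

```python
BTU_PER_THERM = 100_000          # 1 therm = 100,000 BTU
BTU_PER_CUBIC_FOOT = 1_000       # typical caloric value quoted above (assumed here)
MMBTU_PER_CONTRACT = 10_000      # one NYMEX futures contract = 10,000 million BTU

def ccf_to_therms(hundreds_of_cubic_feet):
    """Convert a residential meter reading (units of 100 scf) into therms."""
    btu = hundreds_of_cubic_feet * 100 * BTU_PER_CUBIC_FOOT
    return btu / BTU_PER_THERM

# A household using about 1,000 ccf per year comes out near 1,000 therms,
# i.e. roughly one Residential Customer Equivalent (RCE).
print(ccf_to_therms(1_000))

# Value of one futures contract at a given price in $ per million BTU:
price_per_mmbtu = 10.0
print(price_per_mmbtu * MMBTU_PER_CONTRACT)   # $100,000, as in the example above
```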
Technology
Energy
null
22145
https://en.wikipedia.org/wiki/Newton%27s%20method
Newton's method
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then x1 = x0 − f(x0)/f′(x0) is a better approximation of the root than x0. Geometrically, (x1, 0) is the x-intercept of the tangent of the graph of f at (x0, f(x0)): that is, the improved guess, x1, is the unique root of the linear approximation of f at the initial guess, x0. The process is repeated as x_{n+1} = x_n − f(x_n)/f′(x_n) until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations. Description The idea is to start with an initial guess, then to approximate the function by its tangent line, and finally to compute the x-intercept of this tangent line. This x-intercept will typically be a better approximation to the original function's root than the first guess, and the method can be iterated. If the tangent line to the curve y = f(x) at x = x_n intercepts the x-axis at x_{n+1}, then the slope is f′(x_n) = (f(x_n) − 0) / (x_n − x_{n+1}). Solving for x_{n+1} gives x_{n+1} = x_n − f(x_n)/f′(x_n). We start the process with some arbitrary initial value x0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero, and that f′(x0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly doubles in every step. More details can be found in the analysis section below. Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate. History In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's Metrica (1st–2nd century CE), so it is often called Heron's method. Jamshīd al-Kāshī used a method to solve x^P − N = 0 to find roots of N, a method that was algebraically equivalent to Newton's method, and a similar method was found in Trigonometria Britannica, published by Henry Briggs in 1633. The method first appeared roughly in Isaac Newton's work in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections.
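To make the iteration concrete, here is a minimal sketch in Python of the update x_{n+1} = x_n − f(x_n)/f′(x_n); the tolerance, iteration cap, and the example function are illustrative choices, not part of the article:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("Derivative is zero; the Newton step is undefined.")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:   # stop once successive iterates agree closely
            return x_new
        x = x_new
    raise RuntimeError("No convergence within max_iter iterations.")

# Example: the positive root of f(x) = x^2 - 612, i.e. the square root of 612.
root = newton(lambda x: x * x - 612, lambda x: 2 * x, x0=10.0)
print(root)   # approx. 24.7386337537
```

Each pass of the loop performs exactly one tangent-line step, so the quadratic doubling of correct digits described above is visible by printing the iterates.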
He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case. Newton may have derived his method from a similar, less precise method by mathematician François Viète, however, the two methods are not the same. The essence of Viète's own method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi. The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing. Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero. Arthur Cayley in 1879 in The Newton–Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions. Practical considerations Newton's method is a powerful technique—in general the convergence is quadratic: as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method. Difficulty in calculating the derivative of a function Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like the secant method whose convergence is slower than that of Newton's method. Failure of the method to converge to the root It is important to review the proof of quadratic convergence of Newton's method before implementing it. Specifically, one should review the assumptions made in the proof. For situations where the method fails to converge, it is because the assumptions made in this proof are not met. For example, in some cases, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same method. 
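Where an analytical derivative is unavailable, the slope through two nearby points can stand in for it, as noted above; carried to its conclusion this gives the secant method. The following is a minimal sketch of that idea, with illustrative starting points and stopping rule:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Root finding with the derivative replaced by the slope through two recent points."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("Zero slope estimate; cannot take a step.")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Newton-like step with an approximate derivative
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("No convergence within max_iter iterations.")

# Example: the same root of x^2 - 612, without supplying a derivative.
print(secant(lambda x: x * x - 612, 10.0, 11.0))
```

As the text notes, the price of avoiding the derivative is a somewhat slower (superlinear rather than quadratic) rate of convergence.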
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root finding method. Slow convergence for roots of multiplicity greater than 1 If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together then it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity of the root is known, the following modified algorithm preserves the quadratic convergence rate: This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity of the root is not known, it is possible to estimate after carrying out one or two iterations, and then use that value to increase the rate of convergence. If the multiplicity of the root is finite then will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of recovers quadratic convergence in many cases although it generally involves the second derivative of . In a particularly simple case, if then and Newton's method finds the root in a single iteration with Analysis Suppose that the function has a zero at , i.e., , and is differentiable in a neighborhood of . If is continuously differentiable and its derivative is nonzero at , then there exists a neighborhood of such that for all starting values in that neighborhood, the sequence will converge to . If is continuously differentiable, its derivative is nonzero at , and it has a second derivative at , then the convergence is quadratic or faster. If the second derivative is not 0 at then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of , then: where If the derivative is 0 at , then the convergence is usually only linear. Specifically, if is twice continuously differentiable, and , then there exists a neighborhood of such that, for all starting values in that neighborhood, the sequence of iterates converges linearly, with rate . Alternatively, if and for ,  in a neighborhood of , being a zero of multiplicity , and if , then there exists a neighborhood of such that, for all starting values in that neighborhood, the sequence of iterates converges linearly. However, even linear convergence is not guaranteed in pathological situations. In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood of , if is twice differentiable in and if , in , then, for each in the sequence is monotonically decreasing to . Proof of quadratic convergence for Newton's iterative method According to Taylor's theorem, any function which has a continuous second derivative can be represented by an expansion about a point that is close to a root of . Suppose this root is . Then the expansion of about is: where the Lagrange form of the Taylor series expansion remainder is where is in between and . 
Since is the root, () becomes: Dividing equation () by and rearranging gives Remembering that is defined by one finds that That is, Taking the absolute value of both sides gives Equation () shows that the order of convergence is at least quadratic if the following conditions are satisfied: ; for all , where is the interval ; is continuous, for all ; where is given by If these conditions hold, Fourier conditions Suppose that is a concave function on an interval, which is strictly increasing. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero of somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration starting at the left endpoint is monotonically increasing and convergent, necessarily to . Joseph Fourier introduced a modification of Newton's method starting at the right endpoint: This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of must also be the zero . So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. If is twice continuously differentiable, it can be proved using Taylor's theorem that showing that this difference in locations converges quadratically to zero. All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate. In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of . However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method. Examples Use of Newton's method to compute square roots Newton's method is one of many known methods of computing square roots. Given a positive number , the problem of finding a number such that is equivalent to finding a root of the function . The Newton iteration defined by this function is given by This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root by the arithmetic mean of and . By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations. The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in a "" column is obtained by applying the preceding formula to the entry above it, for instance The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of . 
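The square-root tables described above are easy to reproduce. The following sketch prints the iterates of the Babylonian/Newton update x ← (x + a/x)/2 for a = 612, starting from the three initial values mentioned in the text (1, 10 and −20); the number of steps shown is an arbitrary choice:

```python
def sqrt_iterates(a, x0, steps=6):
    """Newton's method for f(x) = x^2 - a, i.e. x_{n+1} = (x_n + a/x_n) / 2."""
    xs = [x0]
    for _ in range(steps):
        xs.append((xs[-1] + a / xs[-1]) / 2)
    return xs

for start in (1.0, 10.0, -20.0):
    print(start, sqrt_iterates(612, start))
# Each sequence converges to +/- 24.7386337537..., the square roots of 612;
# the negative start converges to the negative root.
```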
When computing any nonzero square root, the first derivative of must be nonzero at the root, and that is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration. Solution of using Newton's method Consider the problem of finding the positive number with . We can rephrase that as finding the zero of . We have . Since for all and for , we know that our solution lies between 0 and 1. A starting value of 0 will lead to an undefined result which illustrates the importance of using a starting point close to the solution. For example, with an initial guess , the sequence given by Newton's method is: The correct digits are underlined in the above example. In particular, is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for ) to 5 and 10, illustrating the quadratic convergence. Slow convergence The function has a root at 0. Since is continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivative is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved. The function also has a root at 0, where it is continuously differentiable. Although the first derivative is nonzero at the root, the second derivative is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the above and the right of which shows Newton's method applied to . The quadratic convergence in iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...). The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function has a root at 1. Since and is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root. Convergence dependent on initialization The function has a root at 0. The Newton iteration is given by From this, it can be seen that there are three possible phenomena for a Newton iteration. 
If initialized strictly between , the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at or , the Newton iteration will oscillate endlessly between ; if initialized anywhere else, the Newton iteration will diverge. This same trichotomy occurs for . In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function has roots at −1, 0, 1, and 3. If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to ; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to ; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to . This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal. Divergence even when initialization is close to the root Consider the problem of finding a root of . The Newton iteration is Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this case is not differentiable at its root. In the above example, failure of convergence is reflected by the failure of to get closer to zero as increases, as well as by the fact that successive iterates are growing further and further apart. However, the function also has a root at 0. The Newton iteration is given by In this example, where again is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both and converging to zero. This is seen in the following table showing the iterates with initialization 1: Although the convergence of in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of and might falsely identify a root. Oscillatory behavior It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function to oscillate between 0 and 1, it is only necessary that the tangent line to at 0 intersects the -axis at 1 and that the tangent line to at 1 intersects the -axis at 0. This is the case, for example, if . For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will asymptotically oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of approximately equal to −1.76929. Undefinedness of Newton's method In some cases, it is not even possible to perform the Newton iteration. For example, if , then the Newton iteration is defined by So Newton's method cannot be initialized at 0, since this would make undefined. Geometrically, this is because the tangent line to at 0 is horizontal (i.e. ), never intersecting the -axis. 
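A minimal sketch of this failure mode follows, assuming f(x) = 1 − x² purely as a stand-in for the unspecified example; any function whose tangent at the initial point is horizontal fails in the same way, because the Newton step divides by the derivative.

def newton_step(g, g_prime, x):
    d = g_prime(x)
    if d == 0:
        raise ZeroDivisionError(f"horizontal tangent at x = {x}: Newton step undefined")
    return x - g(x) / d

def f(x):
    return 1 - x**2   # assumed stand-in; its tangent at 0 is the horizontal line y = 1

def f_prime(x):
    return -2 * x

print(newton_step(f, f_prime, 0.5))   # an ordinary step: 0.5 - 0.75/(-1.0) = 1.25
try:
    newton_step(f, f_prime, 0.0)      # f'(0) = 0, so the step cannot be formed
except ZeroDivisionError as err:
    print(err)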
Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued. If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration. For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. Newton's iteration in this case is given by x_{n+1} = x_n (1 − ln x_n). So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued. Multidimensional formulations Systems of equations: k variables, k functions One may also use Newton's method to solve systems of equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions. This is equivalent to finding the zeroes of a single vector-valued function F. In the formulation given above, the scalars are replaced by vectors, and instead of dividing the function by its derivative one has to left-multiply the function by the inverse of its Jacobian matrix J_F. This results in the expression x_{n+1} = x_n − J_F(x_n)^{−1} F(x_n), or, equivalently, the correction x_{n+1} − x_n is obtained by solving the system of linear equations J_F(x_n) (x_{n+1} − x_n) = −F(x_n). k variables, m equations, with m > k The k-dimensional variant of Newton's method can be used to solve systems of more than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix instead of the inverse of the Jacobian. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information. Example For example, the following set of equations needs to be solved for the vector of points, given a vector of known values. The function vector, the Jacobian matrix for iteration k, and the vector of known values are defined below. Note that the system could have been rewritten to absorb the vector of known values and thus eliminate it from the equations. The equations to solve at each iteration are the corresponding linear system and the update of the iterate, as in the general expression above. The iterations should be repeated until , where is a value acceptably small enough to meet application requirements. If the vector of points is initially chosen to be , that is, and , and is chosen to be 1, then the example converges after four iterations to a value of . Iterations The following iterations were made during the course of the solution. (Table: converging iteration sequence, listing the step number and the value of each variable at each step.) Complex functions When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown.
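Such a map can be sketched directly. The following example uses p(z) = z³ − 1, an illustrative choice not fixed by the text, and labels a coarse grid of starting points by the cube root of unity that Newton's method reaches; finer grids reveal the fractal boundaries described below.

import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # the cube roots of 1

def basin(z, iterations=40, tol=1e-8):
    """Return the index of the root that Newton's method for z**3 - 1 reaches from z."""
    for _ in range(iterations):
        dz = 3 * z**2
        if dz == 0:
            return None              # Newton step undefined (e.g. starting at 0)
        z = z - (z**3 - 1) / dz      # Newton step for p(z) = z**3 - 1
    for k, r in enumerate(roots):
        if abs(z - r) < tol:
            return k
    return None                      # did not settle near any root

for im in range(2, -3, -1):          # a coarse 5-by-5 grid of starting points
    row = ""
    for re in range(-2, 3):
        label = basin(complex(re, im))
        row += "." if label is None else str(label)
    print(row)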
For many complex functions, the boundaries of the basins of attraction are fractals. In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example, if one uses a real initial condition to seek a root of , all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length. Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method will certainly converge at one of them at least. In a Banach space Another generalization is Newton's method to find a root of a functional defined in a Banach space. In this case the formulation is where is the Fréchet derivative computed at . One needs the Fréchet derivative to be boundedly invertible at each in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem. Nash–Moser iteration In the 1950s, John Nash developed a version of the Newton's method to apply to the problem of constructing isometric embeddings of general Riemannian manifolds in Euclidean space. The loss of derivatives problem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction of smoothing operators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving an implicit function theorem for isometric embeddings. In the 1960s, Jürgen Moser showed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly in celestial mechanics. Since then, a number of mathematicians, including Mikhael Gromov and Richard Hamilton, have found generalized abstract versions of the Nash–Moser theory. In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certain Fréchet spaces. Modifications Quasi-Newton methods When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used. Chebyshev's third-order method Over -adic numbers In -adic analysis, the standard method to show a polynomial equation in one variable has a -adic root is Hensel's lemma, which uses the recursion from Newton's method on the -adic numbers. Because of the more stable behavior of addition and multiplication in the -adic numbers compared to the real numbers (specifically, the unit ball in the -adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line. -analog Newton's method can be generalized with the -analog of the usual derivative. Modified Newton methods Maehly's procedure A nonlinear equation has multiple solutions in general. 
But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found solutions of , then the next root can be found by applying Newton's method to the next equation: This method is applied to obtain zeros of the Bessel function of the second kind. Hirano's modified Newton method Hirano's modified Newton method is a modification conserving the convergence of Newton method and avoiding unstableness. It is developed to solve complex polynomials. Interval Newton's method Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial). Consider , where is a real interval, and suppose that we have an interval extension of , meaning that takes as input an interval and outputs an interval such that: We also assume that , so in particular has at most one root in . We then define the interval Newton operator by: where . Note that the hypothesis on implies that is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence: The mean value theorem ensures that if there is a root of in , then it is also in . Moreover, the hypothesis on ensures that is at most half the size of when is the midpoint of , so this sequence converges towards , where is the root of in . If strictly contains 0, the use of extended interval division produces a union of two intervals for ; multiple roots are therefore automatically separated and bounded. Applications Minimization and maximization problems Newton's method can be used to find a minimum or maximum of a function . The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes: Multiplicative inverses of numbers and power series An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number , using only multiplication and subtraction, that is to say the number such that . We can rephrase that as finding the zero of . We have . Newton's iteration is Therefore, Newton's iteration needs only two multiplications and one subtraction. This method is also very efficient to compute the multiplicative inverse of a power series. Solving transcendental equations Many transcendental equations can be solved up to an arbitrary precision by using Newton's method. For example, finding the cumulative probability density function, such as a Normal distribution to fit a known probability generally involves integral functions with no known means to solve in closed form. However, computing the derivatives needed to solve them numerically with Newton's method is generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse Normal cumulative distribution. 
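As a sketch of this use, the following code inverts the standard normal cumulative distribution function with Newton's method: the function has no closed-form inverse, but it can be evaluated through the error function, and its derivative is simply the normal density. The target probability 0.975 is an arbitrary illustrative choice.

import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def normal_quantile(p, x0=0.0, iterations=20):
    """Solve normal_cdf(x) = p by Newton's method, starting from x0."""
    x = x0
    for _ in range(iterations):
        x -= (normal_cdf(x) - p) / normal_pdf(x)   # Newton step for normal_cdf(x) - p = 0
    return x

print(normal_quantile(0.975))   # approximately 1.95996..., the familiar 97.5% point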
Numerical verification for solutions of nonlinear equations A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates. Code The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime. The initial guess will be and the function will be so that . Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if , since otherwise a large amount of error could be introduced. def f(x): return x**2 - 2 # f(x) = x^2 - 2 def f_prime(x): return 2*x # f'(x) = 2x def newtons_method(x0, f, f_prime, tolerance, epsilon, max_iterations): """Newton's method Args: x0: The initial guess f: The function whose root we are trying to find f_prime: The derivative of the function tolerance: Stop when iterations change by less than this epsilon: Do not divide by a number smaller than this max_iterations: The maximum number of iterations to compute """ for _ in range(max_iterations): y = f(x0) yprime = f_prime(x0) if abs(yprime) < epsilon: # Give up if the denominator is too small break x1 = x0 - y / yprime # Do Newton's computation if abs(x1 - x0) <= tolerance: # Stop when the result is within the desired tolerance return x1 # x1 is a solution within tolerance and maximum number of iterations x0 = x1 # Update x0 to start the process again return None # Newton's method did not converge
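A possible invocation of the function defined above follows; the initial guess and the tolerance, epsilon and iteration-limit values are illustrative choices rather than part of the original listing.

root = newtons_method(
    x0=1.0,
    f=f,
    f_prime=f_prime,
    tolerance=1e-10,
    epsilon=1e-14,
    max_iterations=50,
)
print(root)  # approximately 1.41421356..., i.e. the square root of 2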
Mathematics
Real analysis
null
22151
https://en.wikipedia.org/wiki/Nuclear%20reactor
Nuclear reactor
A nuclear reactor is a device used to initiate and control a fission nuclear chain reaction. Reactors are used for commercial electricity generation, marine propulsion, weapons production and research. When a fissile nucleus, usually uranium-235 or plutonium-239, absorbs a neutron, it splits into lighter nuclei, releasing energy, gamma radiation, and free neutrons, which can induce further fission in a self-sustaining chain reaction. Reactors stabilize this with systems of active and passive control, varying the presence of neutron absorbers and moderators in the core and maintaining criticality with delayed neutrons. Fuel efficiency is exceptionally high; low-enriched uranium has an energy density 120,000 times higher than coal. Following the discovery of nuclear fission in 1938, many countries initiated military nuclear research programs. Early subcritical "atomic piles" were built to allow research on fission and neutronics. The American Manhattan Project made the vast majority of early breakthroughs. In 1942, the first artificial critical nuclear reactor, Chicago Pile-1, was built at the University of Chicago by a team led by Enrico Fermi. From 1944, with the goal of producing weapons-grade plutonium for fission bombs, the first large-scale reactors were operated at the American Hanford Site. The pressurized water reactor design, used in over 70% of current commercial reactors, was developed by the US Navy for submarine propulsion, beginning with the S1W in 1953. In 1954, nuclear grid electricity production began with the Soviet Obninsk AM-1 reactor. Heat from nuclear fission is passed to a working fluid coolant. In commercial reactors, this drives turbines connected to electrical generator shafts. The heat can also be used for district heating and for industrial applications including desalination and hydrogen production. Some reactors are used to produce isotopes for medical and industrial use. Reactors pose a nuclear proliferation risk because they can be configured to produce plutonium and tritium for nuclear weapons. Spent fuel can be reprocessed, reducing nuclear waste and recovering some reactor-usable MOX fuel. Reprocessing is used in Europe and Asia, but due to proliferation concerns the United States does not engage in or encourage reprocessing. Reactor accidents have been caused by combinations of design and operator failure. The International Nuclear Event Scale rates events from Level 1 to Level 7 according to severity, including the amount of radioactive material released to the environment. The 1979 Three Mile Island accident, at Level 5, and the 1986 Chernobyl disaster and 2011 Fukushima disaster, both at Level 7, all had major effects on the nuclear industry and the anti-nuclear movement. There are 417 commercial reactors, 226 research reactors, and over 160 ships powered by more than 200 marine propulsion reactors in operation globally. Commercial reactors provide 9% of the global electricity supply, compared to 30% from renewables, together comprising the bulk of low-carbon electricity generation. The US Department of Energy classifies reactors into generations, with the majority of the global fleet being Generation II reactors constructed from the 1960s to the 1990s, and Generation IV reactors currently in development. Reactors can also be grouped by the choices of coolant and moderator. Almost 90% of global nuclear energy comes from pressurized water reactors and boiling water reactors, which use water as a coolant and moderator.
Other designs include heavy water reactors, gas-cooled reactors, and fast breeder reactors, variously optimizing efficiency, safety, and fuel type, enrichment, and burnup. Small modular reactors are also an area of current development. Operation Just as conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear reactors convert the energy released by controlled nuclear fission into thermal energy for further conversion to mechanical or electrical forms. Fission When a large fissile atomic nucleus such as uranium-235, uranium-233, or plutonium-239 absorbs a neutron, it may undergo nuclear fission. The heavy nucleus splits into two or more lighter nuclei, (the fission products), releasing kinetic energy, gamma radiation, and free neutrons. A portion of these neutrons may be absorbed by other fissile atoms and trigger further fission events, which release more neutrons, and so on. This is known as a nuclear chain reaction. To control such a nuclear chain reaction, control rods containing neutron poisons and neutron moderators are able to change the portion of neutrons that will go on to cause more fission. Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if monitoring or instrumentation detects unsafe conditions. Heat generation The reactor core generates heat in a number of ways: The kinetic energy of fission products is converted to thermal energy when these nuclei collide with nearby atoms. The reactor absorbs some of the gamma rays produced during fission and converts their energy into heat. Heat is produced by the radioactive decay of fission products and materials that have been activated by neutron absorption. This decay heat source will remain for some time even after the reactor is shut down. A kilogram of uranium-235 (U-235) converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally (7.2 × 1013 joules per kilogram of uranium-235 versus 2.4 × 107 joules per kilogram of coal). The fission of one kilogram of uranium-235 releases about 19 billion kilocalories, so the energy released by 1 kg of uranium-235 corresponds to that released by burning 2.7 million kg of coal. Cooling A nuclear reactor coolant – usually water but sometimes a gas or a liquid metal (like liquid sodium or lead) or molten salt – is circulated past the reactor core to absorb the heat that it generates. The heat is carried away from the reactor and is then used to generate steam. Most reactor systems employ a cooling system that is physically separated from the water that will be boiled to produce pressurized steam for the turbines, like the pressurized water reactor. However, in some reactors the water for the steam turbines is boiled directly by the reactor core; for example the boiling water reactor. Reactivity control The rate of fission reactions within a reactor core can be adjusted by controlling the quantity of neutrons that are able to induce further fission events. Nuclear reactors typically employ several methods of neutron control to adjust the reactor's power output. Some of these methods arise naturally from the physics of radioactive decay and are simply accounted for during the reactor's operation, while others are mechanisms engineered into the reactor design for a distinct purpose. The fastest method for adjusting levels of fission-inducing neutrons in a reactor is via movement of the control rods. 
Control rods are made of so-called neutron poisons and therefore absorb neutrons. When a control rod is inserted deeper into the reactor, it absorbs more neutrons than the material it displaces – often the moderator. This action results in fewer neutrons available to cause fission and reduces the reactor's power output. Conversely, extracting the control rod will result in an increase in the rate of fission events and an increase in power. The physics of radioactive decay also affects neutron populations in a reactor. One such process is delayed neutron emission by a number of neutron-rich fission isotopes. These delayed neutrons account for about 0.65% of the total neutrons produced in fission, with the remainder (termed "prompt neutrons") released immediately upon fission. The fission products which produce delayed neutrons have half-lives for their decay by neutron emission that range from milliseconds to as long as several minutes, and so considerable time is required to determine exactly when a reactor reaches the critical point. Keeping the reactor in the zone of chain reactivity where delayed neutrons are necessary to achieve a critical mass state allows mechanical devices or human operators to control a chain reaction in "real time"; otherwise the time between achievement of criticality and nuclear meltdown as a result of an exponential power surge from the normal nuclear chain reaction, would be too short to allow for intervention. This last stage, where delayed neutrons are no longer required to maintain criticality, is known as the prompt critical point. There is a scale for describing criticality in numerical form, in which bare criticality is known as zero dollars and the prompt critical point is one dollar, and other points in the process interpolated in cents. In some reactors, the coolant also acts as a neutron moderator. A moderator increases the power of the reactor by causing the fast neutrons that are released from fission to lose energy and become thermal neutrons. Thermal neutrons are more likely than fast neutrons to cause fission. If the coolant is a moderator, then temperature changes can affect the density of the coolant/moderator and therefore change power output. A higher temperature coolant would be less dense, and therefore a less effective moderator. In other reactors, the coolant acts as a poison by absorbing neutrons in the same way that the control rods do. In these reactors, power output can be increased by heating the coolant, which makes it a less dense poison. Nuclear reactors generally have automatic and manual systems to scram the reactor in an emergency shut down. These systems insert large amounts of poison (often boron in the form of boric acid) into the reactor to shut the fission reaction down if unsafe conditions are detected or anticipated. Most types of reactors are sensitive to a process variously known as xenon poisoning, or the iodine pit. The common fission product Xenon-135 produced in the fission process acts as a neutron poison that absorbs neutrons and therefore tends to shut the reactor down. Xenon-135 accumulation can be controlled by keeping power levels high enough to destroy it by neutron absorption as fast as it is produced. Fission also produces iodine-135, which in turn decays (with a half-life of 6.57 hours) to new xenon-135. 
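The time behaviour of this decay chain can be sketched with the half-lives quoted in this section (6.57 hours for iodine-135 and, as noted below, 9.2 hours for xenon-135). The sketch assumes arbitrary relative initial inventories rather than real reactor data, and it ignores neutron burn-off, which corresponds to a shut-down core.

import math

HALF_LIFE_I135 = 6.57    # hours, from the text above
HALF_LIFE_XE135 = 9.2    # hours, quoted later in this section
LAMBDA_I = math.log(2) / HALF_LIFE_I135
LAMBDA_XE = math.log(2) / HALF_LIFE_XE135

def xenon_after_shutdown(t, i0=1.0, xe0=0.2):
    """Relative Xe-135 inventory t hours after shutdown (arbitrary initial units)."""
    from_iodine = i0 * LAMBDA_I / (LAMBDA_I - LAMBDA_XE) * (
        math.exp(-LAMBDA_XE * t) - math.exp(-LAMBDA_I * t))
    return xe0 * math.exp(-LAMBDA_XE * t) + from_iodine

for t in range(0, 49, 6):
    print(f"{t:2d} h after shutdown: relative Xe-135 inventory = {xenon_after_shutdown(t):.3f}")
# The xenon-135 inventory rises for several hours before decaying away,
# which is the "iodine pit" described below.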
When the reactor is shut down, iodine-135 continues to decay to xenon-135, making restarting the reactor more difficult for a day or two, as the xenon-135 decays into cesium-135, which is not nearly as poisonous as xenon-135, with a half-life of 9.2 hours. This temporary state is the "iodine pit." If the reactor has sufficient extra reactivity capacity, it can be restarted. As the extra xenon-135 is transmuted to xenon-136, which is much less a neutron poison, within a few hours the reactor experiences a "xenon burnoff (power) transient". Control rods must be further inserted to replace the neutron absorption of the lost xenon-135. Failure to properly follow such a procedure was a key step in the Chernobyl disaster. Reactors used in nuclear marine propulsion (especially nuclear submarines) often cannot be run at continuous power around the clock in the same way that land-based power reactors are normally run, and in addition often need to have a very long core life without refueling. For this reason many designs use highly enriched uranium but incorporate burnable neutron poison in the fuel rods. This allows the reactor to be constructed with an excess of fissionable material, which is nevertheless made relatively safe early in the reactor's fuel burn cycle by the presence of the neutron-absorbing material which is later replaced by normally produced long-lived neutron poisons (far longer-lived than xenon-135) which gradually accumulate over the fuel load's operating life. Electrical power generation The energy released in the fission process generates heat, some of which can be converted into usable energy. A common method of harnessing this thermal energy is to use it to boil water to produce pressurized steam which will then drive a steam turbine that turns an alternator and generates electricity. Life-times Modern nuclear power plants are typically designed for a lifetime of 60 years, while older reactors were built with a planned typical lifetime of 30–40 years, though many of those have received renovations and life extensions of 15–20 years. Some believe nuclear power plants can operate for as long as 80 years or longer with proper maintenance and management. While most components of a nuclear power plant, such as steam generators, are replaced when they reach the end of their useful lifetime, the overall lifetime of the power plant is limited by the life of components that cannot be replaced when aged by wear and neutron embrittlement, such as the reactor pressure vessel. At the end of their planned life span, plants may get an extension of the operating license for some 20 years and in the US even a "subsequent license renewal" (SLR) for an additional 20 years. Even when a license is extended, it does not guarantee the reactor will continue to operate, particularly in the face of safety concerns or incident. Many reactors are closed long before their license or design life expired and are decommissioned. The costs for replacements or improvements required for continued safe operation may be so high that they are not cost-effective. Or they may be shut down due to technical failure. Other ones have been shut down because the area was contaminated, like Fukushima, Three Mile Island, Sellafield, and Chernobyl. The British branch of the French concern EDF Energy, for example, extended the operating lives of its Advanced Gas-cooled Reactors (AGR) with only between 3 and 10 years. All seven AGR plants were expected to be shut down in 2022 and in decommissioning by 2028. 
Hinkley Point B was extended from 40 to 46 years, and closed. The same happened with Hunterston B, also after 46 years. An increasing number of reactors is reaching or crossing their design lifetimes of 30 or 40 years. In 2014, Greenpeace warned that the lifetime extension of ageing nuclear power plants amounts to entering a new era of risk. It estimated the current European nuclear liability coverage in average to be too low by a factor of between 100 and 1,000 to cover the likely costs, while at the same time, the likelihood of a serious accident happening in Europe continues to increase as the reactor fleet grows older. Early reactors The neutron was discovered in 1932 by British physicist James Chadwick. The concept of a nuclear chain reaction brought about by nuclear reactions mediated by neutrons was first realized shortly thereafter, by Hungarian scientist Leó Szilárd, in 1933. He filed a patent for his idea of a simple reactor the following year while working at the Admiralty in London, England. However, Szilárd's idea did not incorporate the idea of nuclear fission as a neutron source, since that process was not yet discovered. Szilárd's ideas for nuclear reactors using neutron-mediated nuclear chain reactions in light elements proved unworkable. Inspiration for a new type of reactor using uranium came from the discovery by Otto Hahn, Lise Meitner, and Fritz Strassmann in 1938 that bombardment of uranium with neutrons (provided by an alpha-on-beryllium fusion reaction, a "neutron howitzer") produced a barium residue, which they reasoned was created by fission of the uranium nuclei. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening the possibility of a nuclear chain reaction. Subsequent studies in early 1939 (one of them by Szilárd and Fermi), revealed that several neutrons were indeed released during fission, making available the opportunity for the nuclear chain reaction that Szilárd had envisioned six years previously. On 2 August 1939, Albert Einstein signed a letter to President Franklin D. Roosevelt (written by Szilárd) suggesting that the discovery of uranium's fission could lead to the development of "extremely powerful bombs of a new type", giving impetus to the study of reactors and fission. Szilárd and Einstein knew each other well and had worked together years previously, but Einstein had never thought about this possibility for nuclear energy until Szilard reported it to him, at the beginning of his quest to produce the Einstein-Szilárd letter to alert the U.S. government. Shortly after, Nazi Germany invaded Poland in 1939, starting World War II in Europe. The U.S. was not yet officially at war, but in October, when the Einstein-Szilárd letter was delivered to him, Roosevelt commented that the purpose of doing the research was to make sure "the Nazis don't blow us up." The U.S. nuclear project followed, although with some delay as there remained skepticism (some of it from Enrico Fermi) and also little action from the small number of officials in the government who were initially charged with moving the project forward. The following year, the U.S. Government received the Frisch–Peierls memorandum from the UK, which stated that the amount of uranium needed for a chain reaction was far lower than had previously been thought. 
The memorandum was a product of the MAUD Committee, which was working on the UK atomic bomb project, known as Tube Alloys, later to be subsumed within the Manhattan Project. Eventually, the first artificial nuclear reactor, Chicago Pile-1, was constructed at the University of Chicago, by a team led by Italian physicist Enrico Fermi, in late 1942. By this time, the program had been pressured for a year by U.S. entry into the war. The Chicago Pile achieved criticality on 2 December 1942 at 3:25 PM. The reactor support structure was made of wood, which supported a pile (hence the name) of graphite blocks, embedded in which was natural uranium oxide 'pseudospheres' or 'briquettes'. Soon after the Chicago Pile, the Metallurgical Laboratory developed a number of nuclear reactors for the Manhattan Project starting in 1943. The primary purpose for the largest reactors (located at the Hanford Site in Washington), was the mass production of plutonium for nuclear weapons. Fermi and Szilard applied for a patent on reactors on 19 December 1944. Its issuance was delayed for 10 years because of wartime secrecy. "World's first nuclear power plant" is the claim made by signs at the site of the EBR-I, which is now a museum near Arco, Idaho. Originally called "Chicago Pile-4", it was carried out under the direction of Walter Zinn for Argonne National Laboratory. This experimental LMFBR operated by the U.S. Atomic Energy Commission produced 0.8 kW in a test on 20 December 1951 and 100 kW (electrical) the following day, having a design output of 200 kW (electrical). Besides the military uses of nuclear reactors, there were political reasons to pursue civilian use of atomic energy. U.S. President Dwight Eisenhower made his famous Atoms for Peace speech to the UN General Assembly on 8 December 1953. This diplomacy led to the dissemination of reactor technology to U.S. institutions and worldwide. The first nuclear power plant built for civil purposes was the AM-1 Obninsk Nuclear Power Plant, launched on 27 June 1954 in the Soviet Union. It produced around 5 MW (electrical). It was built after the F-1 (nuclear reactor) which was the first reactor to go critical in Europe, and was also built by the Soviet Union. After World War II, the U.S. military sought other uses for nuclear reactor technology. Research by the Army led to the power stations for Camp Century, Greenland and McMurdo Station, Antarctica Army Nuclear Power Program. The Air Force Nuclear Bomber project resulted in the Molten-Salt Reactor Experiment. The U.S. Navy succeeded when they steamed the USS Nautilus (SSN-571) on nuclear power 17 January 1955. The first commercial nuclear power station, Calder Hall in Sellafield, England was opened in 1956 with an initial capacity of 50 MW (later 200 MW). The first portable nuclear reactor "Alco PM-2A" was used to generate electrical power (2 MW) for Camp Century from 1960 to 1963. Table by date Table by country Reactor types Classifications By type of nuclear reaction All commercial power reactors are based on nuclear fission. They generally use uranium and its product plutonium as nuclear fuel, though a thorium fuel cycle is also possible. Fission reactors can be divided roughly into two classes, depending on the energy of the neutrons that sustain the fission chain reaction: Thermal-neutron reactors use slowed or thermal neutrons to keep up the fission of their fuel. Almost all current reactors are of this type. 
These contain neutron moderator materials that slow neutrons until their neutron temperature is thermalized, that is, until their kinetic energy approaches the average kinetic energy of the surrounding particles. Thermal neutrons have a far higher cross section (probability) of fissioning the fissile nuclei uranium-235, plutonium-239, and plutonium-241, and a relatively lower probability of neutron capture by uranium-238 (U-238) compared to the faster neutrons that originally result from fission, allowing use of low-enriched uranium or even natural uranium fuel. The moderator is often also the coolant, usually water under high pressure to increase the boiling point. These are surrounded by a reactor vessel, instrumentation to monitor and control the reactor, radiation shielding, and a containment building. Fast-neutron reactors use fast neutrons to cause fission in their fuel. They do not have a neutron moderator, and use less-moderating coolants. Maintaining a chain reaction requires the fuel to be more highly enriched in fissile material (about 20% or more) due to the relatively lower probability of fission versus capture by U-238. Fast reactors have the potential to produce less transuranic waste because all actinides are fissionable with fast neutrons, but they are more difficult to build and more expensive to operate. Overall, fast reactors are less common than thermal reactors in most applications. Some early power stations were fast reactors, as are some Russian naval propulsion units. Construction of prototypes is continuing (see fast breeder or generation IV reactors). In principle, fusion power could be produced by nuclear fusion of elements such as the deuterium isotope of hydrogen. While an ongoing rich research topic since at least the 1940s, no self-sustaining fusion reactor for any purpose has ever been built. By moderator material Used by thermal reactors: Graphite-moderated reactors Mostly early reactors such as the Chicago pile, Obninsk am 1, Windscale piles, RBMK, Magnox, and others such as AGR use graphite as a moderator. Water moderated reactors Heavy-water reactors (Used in Canada, India, Argentina, China, Pakistan, Romania and South Korea). Light-water-moderated reactors (LWRs). Light-water reactors (the most common type of thermal reactor) use ordinary water to moderate and cool the reactors. Because the light hydrogen isotope is a slight neutron poison, these reactors need artificially enriched fuels. When at operating temperature, if the temperature of the water increases, its density drops, and fewer neutrons passing through it are slowed enough to trigger further reactions. That negative feedback stabilizes the reaction rate. Graphite and heavy-water reactors tend to be more thoroughly thermalized than light water reactors. Due to the extra thermalization, and the absence of the light hydrogen poisoning effects these types can use natural uranium/unenriched fuel. Light-element-moderated reactors. Molten-salt reactors (MSRs) are moderated by light elements such as lithium or beryllium, which are constituents of the coolant/fuel matrix salts "LiF" and "BeF2", "LiCl" and "BeCl2" and other light element containing salts can all cause a moderating effect. Liquid metal cooled reactors, such as those whose coolant is a mixture of lead and bismuth, may use BeO as a moderator. Organically moderated reactors (OMR) use biphenyl and terphenyl as moderator and coolant. By coolant Water cooled reactor. 
These constitute the great majority of operational nuclear reactors: as of 2014, 93% of the world's nuclear reactors are water cooled, providing about 95% of the world's total nuclear generation capacity. Pressurized water reactor (PWR) Pressurized water reactors constitute the large majority of all Western nuclear power plants. A primary characteristic of PWRs is a pressurizer, a specialized pressure vessel. Most commercial PWRs and naval reactors use pressurizers. During normal operation, a pressurizer is partially filled with water, and a steam bubble is maintained above it by heating the water with submerged heaters. During normal operation, the pressurizer is connected to the primary reactor pressure vessel (RPV) and the pressurizer "bubble" provides an expansion space for changes in water volume in the reactor. This arrangement also provides a means of pressure control for the reactor by increasing or decreasing the steam pressure in the pressurizer using the pressurizer heaters. Pressurized heavy water reactors are a subset of pressurized water reactors, sharing the use of a pressurized, isolated heat transport loop, but using heavy water as coolant and moderator for the greater neutron economies it offers. Boiling water reactor (BWR) BWRs are characterized by boiling water around the fuel rods in the lower portion of a primary reactor pressure vessel. A boiling water reactor uses 235U, enriched as uranium dioxide, as its fuel. The fuel is assembled into rods housed in a steel vessel that is submerged in water. The nuclear fission causes the water to boil, generating steam. This steam flows through pipes into turbines. The turbines are driven by the steam, and this process generates electricity. During normal operation, pressure is controlled by the amount of steam flowing from the reactor pressure vessel to the turbine. Supercritical water reactor (SCWR) SCWRs are a Generation IV reactor concept where the reactor is operated at supercritical pressures and water is heated to a supercritical fluid, which never undergoes a transition to steam yet behaves like saturated steam, to power a steam generator. Reduced moderation water reactor [RMWR] which use more highly enriched fuel with the fuel elements set closer together to allow a faster neutron spectrum sometimes called an Epithermal neutron Spectrum. Pool-type reactor can refer to unpressurized water cooled open pool reactors, but not to be confused with pool type LMFBRs which are sodium cooled Some reactors have been cooled by heavy water which also served as a moderator. Examples include: Early CANDU reactors (later ones use heavy water moderator but light water coolant) DIDO class research reactors Liquid metal cooled reactor. Since water is a moderator, it cannot be used as a coolant in a fast reactor. Liquid metal coolants have included sodium, NaK, lead, lead-bismuth eutectic, and in early reactors, mercury. Sodium-cooled fast reactor Lead-cooled fast reactor Gas cooled reactors are cooled by a circulating gas. In commercial nuclear power plants carbon dioxide has usually been used, for example in current British AGR nuclear power plants and formerly in a number of first generation British, French, Italian, and Japanese plants. Nitrogen and helium have also been used, helium being considered particularly suitable for high temperature designs. Use of the heat varies, depending on the reactor. Commercial nuclear power plants run the gas through a heat exchanger to make steam for a steam turbine. 
Some experimental designs run hot enough that the gas can directly power a gas turbine. Molten-salt reactors (MSRs) are cooled by circulating a molten salt, typically a eutectic mixture of fluoride salts, such as FLiBe. In a typical MSR, the coolant is also used as a matrix in which the fissile material is dissolved. Other eutectic salt combinations used include "ZrF4" with "NaF" and "LiCl" with "BeCl2". Organic nuclear reactors use organic fluids such as biphenyl and terphenyl as coolant rather than water. By generation Generation I reactor (early prototypes such as Shippingport Atomic Power Station, research reactors, non-commercial power producing reactors) Generation II reactor (most current nuclear power plants, 1965–1996) Generation III reactor (evolutionary improvements of existing designs, 1996–2016) Generation III+ reactor (evolutionary development of Gen III reactors, offering improvements in safety over Gen III reactor designs, 2017–2021) Generation IV reactor (technologies still under development; unknown start date, see below) Generation V reactor (designs which are theoretically possible, but which are not being actively considered or researched at present). In 2003, the French Commissariat à l'Énergie Atomique (CEA) was the first to refer to "Gen II" types in Nucleonics Week. The first mention of "Gen III" was in 2000, in conjunction with the launch of the Generation IV International Forum (GIF) plans. "Gen IV" was named in 2000, by the United States Department of Energy (DOE), for developing new plant types. By phase of fuel Solid fueled Fluid fueled Aqueous homogeneous reactor Molten-salt reactor Gas fueled (theoretical) By shape of the core Cubical Cylindrical Octagonal Spherical Slab Annulus By use Electricity Nuclear power plants including small modular reactors Propulsion, see nuclear propulsion Nuclear marine propulsion Various proposed forms of rocket propulsion Other uses of heat Desalination Heat for domestic and industrial heating Hydrogen production for use in a hydrogen economy Production reactors for transmutation of elements Breeder reactors are capable of producing more fissile material than they consume during the fission chain reaction (by converting fertile U-238 to Pu-239, or Th-232 to U-233). Thus, a uranium breeder reactor, once running, can be refueled with natural or even depleted uranium, and a thorium breeder reactor can be refueled with thorium; however, an initial stock of fissile material is required. Creating various radioactive isotopes, such as americium for use in smoke detectors, and cobalt-60, molybdenum-99 and others, used for imaging and medical treatment. Production of materials for nuclear weapons such as weapons-grade plutonium Providing a source of neutron radiation (for example with the pulsed Godiva device) and positron radiation (e.g. neutron activation analysis and potassium-argon dating) Research reactor: Typically reactors used for research and training, materials testing, or the production of radioisotopes for medicine and industry. These are much smaller than power reactors or those propelling ships, and many are on university campuses. There are about 280 such reactors operating, in 56 countries. Some operate with high-enriched uranium fuel, and international efforts are underway to substitute low-enriched fuel. 
Current technologies Pressurized water reactors (PWR) [moderator: high-pressure water; coolant: high-pressure water] These reactors use a pressure vessel to contain the nuclear fuel, control rods, moderator, and coolant. The hot radioactive water that leaves the pressure vessel is looped through a steam generator, which in turn heats a secondary (nonradioactive) loop of water to steam that can run turbines. They represent the majority (around 80%) of current reactors. This is a thermal neutron reactor design, the newest of which are the Russian VVER-1200, Japanese Advanced Pressurized Water Reactor, American AP1000, Chinese Hualong Pressurized Reactor and the Franco-German European Pressurized Reactor. All the United States Naval reactors are of this type. Boiling water reactors (BWR) [moderator: low-pressure water; coolant: low-pressure water] A BWR is like a PWR without the steam generator. The lower pressure of its cooling water allows it to boil inside the pressure vessel, producing the steam that runs the turbines. Unlike a PWR, there is no primary and secondary loop. The thermal efficiency of these reactors can be higher, and they can be simpler, and even potentially more stable and safe. This is a thermal-neutron reactor design, the newest of which are the Advanced Boiling Water Reactor and the Economic Simplified Boiling Water Reactor. Pressurized Heavy Water Reactor (PHWR) [moderator: high-pressure heavy water; coolant: high-pressure heavy water] A Canadian design (known as CANDU), very similar to PWRs but using heavy water. While heavy water is significantly more expensive than ordinary water, it has greater neutron economy (creates a higher number of thermal neutrons), allowing the reactor to operate without fuel enrichment facilities. Instead of using a single large pressure vessel as in a PWR, the fuel is contained in hundreds of pressure tubes. These reactors are fueled with natural uranium and are thermal-neutron reactor designs. PHWRs can be refueled while at full power, (online refueling) which makes them very efficient in their use of uranium (it allows for precise flux control in the core). CANDU PHWRs have been built in Canada, Argentina, China, India, Pakistan, Romania, and South Korea. India also operates a number of PHWRs, often termed 'CANDU derivatives', built after the Government of Canada halted nuclear dealings with India following the 1974 Smiling Buddha nuclear weapon test. Reaktor Bolshoy Moschnosti Kanalniy (High Power Channel Reactor) (RBMK) (also known as a Light-Water Graphite-moderated Reactor—LWGR) [moderator: graphite; coolant: high-pressure water] A Soviet design, RBMKs are in some respects similar to CANDU in that they can be refueled during power operation and employ a pressure tube design instead of a PWR-style pressure vessel. However, unlike CANDU they are unstable and large, making containment buildings for them expensive. A series of critical safety flaws have also been identified with the RBMK design, though some of these were corrected following the Chernobyl disaster. Their main attraction is their use of light water and unenriched uranium. As of 2024, 7 remain open, mostly due to safety improvements and help from international safety agencies such as the U.S. Department of Energy. Despite these safety improvements, RBMK reactors are still considered one of the most dangerous reactor designs in use. RBMK reactors were deployed only in the former Soviet Union. 
Gas-cooled reactor (GCR) and advanced gas-cooled reactor (AGR) [moderator: graphite; coolant: carbon dioxide] These designs have a high thermal efficiency compared with PWRs due to higher operating temperatures. There are a number of operating reactors of this design, mostly in the United Kingdom, where the concept was developed. Older designs (i.e. Magnox stations) are either shut down or will be in the near future. However, the AGRs have an anticipated life of a further 10 to 20 years. This is a thermal-neutron reactor design. Decommissioning costs can be high due to the large volume of the reactor core. Liquid metal fast-breeder reactor (LMFBR) [moderator: none; coolant: liquid metal] This totally unmoderated reactor design produces more fuel than it consumes. They are said to "breed" fuel, because they produce fissionable fuel during operation because of neutron capture. These reactors can function much like a PWR in terms of efficiency, and do not require much high-pressure containment, as the liquid metal does not need to be kept at high pressure, even at very high temperatures. These reactors are fast neutron, not thermal neutron designs. These reactors come in two types: Lead-cooled Using lead as the liquid metal provides excellent radiation shielding, and allows for operation at very high temperatures. Also, lead is (mostly) transparent to neutrons, so fewer neutrons are lost in the coolant, and the coolant does not become radioactive. Unlike sodium, lead is mostly inert, so there is less risk of explosion or accident, but such large quantities of lead may be problematic from toxicology and disposal points of view. Often a reactor of this type would use a lead-bismuth eutectic mixture. In this case, the bismuth would present some minor radiation problems, as it is not quite as transparent to neutrons, and can be transmuted to a radioactive isotope more readily than lead. The Russian Alfa class submarine uses a lead-bismuth-cooled fast reactor as its main power plant. Sodium-cooled Most LMFBRs are of this type. The TOPAZ, BN-350 and BN-600 in USSR; Superphénix in France; and Fermi-I in the United States were reactors of this type. The sodium is relatively easy to obtain and work with, and it also manages to actually prevent corrosion on the various reactor parts immersed in it. However, sodium explodes violently when exposed to water, so care must be taken, but such explosions would not be more violent than (for example) a leak of superheated fluid from a pressurized-water reactor. The Monju reactor in Japan suffered a sodium leak in 1995 and could not be restarted until May 2010. The EBR-I, the first reactor to have a core meltdown, in 1955, was also a sodium-cooled reactor. Pebble-bed reactors (PBR) [moderator: graphite; coolant: helium] These use fuel molded into ceramic balls, and then circulate gas through the balls. The result is an efficient, low-maintenance, very safe reactor with inexpensive, standardized fuel. The prototypes were the AVR and the THTR-300 in Germany, which produced up to 308MW of electricity between 1985 and 1989 until it was shut down after experiencing a series of incidents and technical difficulties. The HTR-10 is operating in China, where the HTR-PM is being developed. The HTR-PM is expected to be the first generation IV reactor to enter operation. Molten-salt reactors (MSR) [moderator: graphite, or none for fast spectrum MSRs; coolant: molten salt mixture] These dissolve the fuels in fluoride or chloride salts, or use such salts for coolant. 
MSRs potentially have many safety features, including the absence of high pressures or highly flammable components in the core. They were initially designed for aircraft propulsion due to their high efficiency and high power density. One prototype, the Molten-Salt Reactor Experiment, was built to confirm the feasibility of the Liquid fluoride thorium reactor, a thermal spectrum reactor which would breed fissile uranium-233 fuel from thorium. Aqueous homogeneous reactor (AHR) [moderator: high-pressure light or heavy water; coolant: high-pressure light or heavy water] These reactors use as fuel soluble nuclear salts (usually uranium sulfate or uranium nitrate) dissolved in water and mixed with the coolant and the moderator. As of April 2006, only five AHRs were in operation. Future and developing technologies Advanced reactors More than a dozen advanced reactor designs are in various stages of development. Some are evolutionary from the PWR, BWR and PHWR designs above, and some are more radical departures. The former include the advanced boiling water reactor (ABWR), two of which are now operating with others under construction, and the planned passively safe Economic Simplified Boiling Water Reactor (ESBWR) and AP1000 units (see Nuclear Power 2010 Program). The integral fast reactor (IFR) was built, tested and evaluated during the 1980s and then retired under the Clinton administration in the 1990s due to nuclear non-proliferation policies of the administration. Recycling spent fuel is the core of its design and it therefore produces only a fraction of the waste of current reactors. The pebble-bed reactor, a high-temperature gas-cooled reactor (HTGCR), is designed so high temperatures reduce power output by Doppler broadening of the fuel's neutron cross-section. It uses ceramic fuels so its safe operating temperatures exceed the power-reduction temperature range. Most designs are cooled by inert helium. Helium is not subject to steam explosions, resists neutron absorption leading to radioactivity, and does not dissolve contaminants that can become radioactive. Typical designs have more layers (up to 7) of passive containment than light water reactors (usually 3). A unique feature that may aid safety is that the fuel balls actually form the core's mechanism, and are replaced one by one as they age. The design of the fuel makes fuel reprocessing expensive. The small, sealed, transportable, autonomous reactor (SSTAR) is primarily being researched and developed in the US, intended as a fast breeder reactor that is passively safe and could be remotely shut down in case the suspicion arises that it is being tampered with. The Clean and Environmentally Safe Advanced Reactor (CAESAR) is a nuclear reactor concept that uses steam as a moderator – this design is in development. The reduced moderation water reactor builds upon the advanced boiling water reactor (ABWR) that is presently in use. It is not a complete fast reactor, instead using mostly epithermal neutrons, which are between thermal and fast neutrons in speed. The hydrogen-moderated self-regulating nuclear power module (HPM) is a reactor design emanating from the Los Alamos National Laboratory that uses uranium hydride as fuel. Subcritical reactors are designed to be safer and more stable, but pose a number of engineering and economic difficulties. One example is the energy amplifier. Thorium-based reactors – It is possible to convert thorium-232 into U-233 in reactors specially designed for the purpose.
In this way, thorium, which is four times more abundant than uranium, can be used to breed U-233 nuclear fuel. U-233 is also believed to have favourable nuclear properties as compared to traditionally used U-235, including better neutron economy and lower production of long-lived transuranic waste. Advanced heavy-water reactor (AHWR) – A proposed heavy water moderated nuclear power reactor that will be the next generation design of the PHWR type. Under development in the Bhabha Atomic Research Centre (BARC), India. KAMINI – A unique reactor using the uranium-233 isotope as fuel. Built in India by BARC and the Indira Gandhi Centre for Atomic Research (IGCAR). India is also planning to build fast breeder reactors using the thorium–uranium-233 fuel cycle. The FBTR (Fast Breeder Test Reactor) in operation at Kalpakkam (India) uses plutonium as a fuel and liquid sodium as a coolant. China, which has control of the Cerro Impacto deposit, has a reactor and hopes to replace coal energy with nuclear energy. Rolls-Royce aims to sell nuclear reactors for the production of synfuel for aircraft. Generation IV reactors Generation IV reactors are a set of theoretical nuclear reactor designs. These are generally not expected to be available for commercial use before 2040–2050, although the World Nuclear Association suggested that some might enter commercial operation before 2030. Current reactors in operation around the world are generally considered second- or third-generation systems, with the first-generation systems having been retired some time ago. Research into these reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals. The primary goals are to improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and to decrease the cost to build and run such plants. The six selected designs are the gas-cooled fast reactor, lead-cooled fast reactor, molten-salt reactor, sodium-cooled fast reactor, supercritical water reactor and very-high-temperature reactor. Generation V+ reactors Generation V reactors are designs which are theoretically possible, but which are not being actively considered or researched at present. Though some generation V reactors could potentially be built with current or near-term technology, they trigger little interest for reasons of economics, practicality, or safety. Liquid-core reactor. A closed loop liquid-core nuclear reactor, where the fissile material is molten uranium or uranium solution cooled by a working gas pumped in through holes in the base of the containment vessel. Gas-core reactor. A closed loop version of the nuclear lightbulb rocket, where the fissile material is gaseous uranium hexafluoride contained in a fused silica vessel. A working gas (such as hydrogen) would flow around this vessel and absorb the UV light produced by the reaction. This reactor design could also function as a rocket engine, as featured in Harry Harrison's 1976 science-fiction novel Skyfall. In theory, using UF6 as a working fuel directly (rather than as a stage to one, as is done now) would mean lower processing costs, and very small reactors. In practice, running a reactor at such high power densities would probably produce unmanageable neutron flux, weakening most reactor materials, and therefore as the flux would be similar to that expected in fusion reactors, it would require similar materials to those selected by the International Fusion Materials Irradiation Facility. Gas core EM reactor.
As in the gas core reactor, but with photovoltaic arrays converting the UV light directly to electricity. This approach is similar to the experimentally proved photoelectric effect that would convert the X-rays generated from aneutronic fusion into electricity, by passing the high energy photons through an array of conducting foils to transfer some of their energy to electrons, the energy of the photon is captured electrostatically, similar to a capacitor. Since X-rays can go through far greater material thickness than electrons, many hundreds or thousands of layers are needed to absorb the X-rays. Fission fragment reactor. A fission fragment reactor is a nuclear reactor that generates electricity by decelerating an ion beam of fission byproducts instead of using nuclear reactions to generate heat. By doing so, it bypasses the Carnot cycle and can achieve efficiencies of up to 90% instead of 40–45% attainable by efficient turbine-driven thermal reactors. The fission fragment ion beam would be passed through a magnetohydrodynamic generator to produce electricity. Hybrid nuclear fusion. Would use the neutrons emitted by fusion to fission a blanket of fertile material, like U-238 or Th-232 and transmute other reactor's spent nuclear fuel/nuclear waste into relatively more benign isotopes. Fusion reactors Controlled nuclear fusion could in principle be used in fusion power plants to produce power without the complexities of handling actinides, but significant scientific and technical obstacles remain. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. The ITER project is currently leading the effort to harness fusion power. Nuclear fuel cycle Thermal reactors generally depend on refined and enriched uranium. Some nuclear reactors can operate with a mixture of plutonium and uranium (see MOX). The process by which uranium ore is mined, processed, enriched, used, possibly reprocessed and disposed of is known as the nuclear fuel cycle. Under 1% of the uranium found in nature is the easily fissionable U-235 isotope and as a result most reactor designs require enriched fuel. Enrichment involves increasing the percentage of U-235 and is usually done by means of gaseous diffusion or gas centrifuge. The enriched result is then converted into uranium dioxide powder, which is pressed and fired into pellet form. These pellets are stacked into tubes which are then sealed and called fuel rods. Many of these fuel rods are used in each nuclear reactor. Most BWR and PWR commercial reactors use uranium enriched to about 4% U-235, and some commercial reactors with a high neutron economy do not require the fuel to be enriched at all (that is, they can use natural uranium). According to the International Atomic Energy Agency there are at least 100 research reactors in the world fueled by highly enriched (weapons-grade/90% enrichment) uranium. Theft risk of this fuel (potentially used in the production of a nuclear weapon) has led to campaigns advocating conversion of this type of reactor to low-enrichment uranium (which poses less threat of proliferation). Fissile U-235 and non-fissile but fissionable and fertile U-238 are both used in the fission process. U-235 is fissionable by thermal (i.e. slow-moving) neutrons. A thermal neutron is one which is moving about the same speed as the atoms around it. 
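As a rough numerical illustration of the thermal-neutron idea just introduced, the most probable speed of a neutron in thermal equilibrium at room temperature can be estimated from kinetic theory; this is a generic physics estimate, not a figure taken from this article:

```python
import math

# Most probable speed of a neutron in thermal equilibrium with its surroundings,
# v = sqrt(2 k_B T / m_n); a standard kinetic-theory estimate.
k_B = 1.380649e-23      # Boltzmann constant, J/K
m_n = 1.674927e-27      # neutron mass, kg

def thermal_neutron_speed(temperature_kelvin):
    return math.sqrt(2 * k_B * temperature_kelvin / m_n)

v_room = thermal_neutron_speed(293.6)                         # ~2200 m/s
energy_eV = 0.5 * m_n * v_room**2 / 1.602176634e-19           # ~0.025 eV

print(f"~{v_room:.0f} m/s, ~{energy_eV:.3f} eV")
# Compare with the roughly 10-20 million m/s speeds (energies of about 1-2 MeV)
# of the fast neutrons released directly by fission.
```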
Since all atoms vibrate proportionally to their absolute temperature, a thermal neutron has the best opportunity to fission U-235 when it is moving at this same vibrational speed. On the other hand, U-238 is more likely to capture a neutron when the neutron is moving very fast. Such a capture converts the U-238 nucleus into U-239, which quickly decays (via neptunium-239) into plutonium-239, another fuel. Pu-239 is a viable fuel and must be accounted for even when a highly enriched uranium fuel is used. Plutonium fissions will dominate the U-235 fissions in some reactors, especially after the initial loading of U-235 is spent. Plutonium is fissionable with both fast and thermal neutrons, which makes it suitable for either nuclear reactors or nuclear bombs. Most reactor designs in existence are thermal reactors and typically use water as a neutron moderator (moderator means that it slows down the neutron to a thermal speed) and as a coolant. But in a fast breeder reactor, some other kind of coolant is used which will not moderate or slow the neutrons down much. This enables fast neutrons to dominate, which can effectively be used to constantly replenish the fuel supply. By merely placing cheap unenriched uranium into such a core, the non-fissile U-238 will be turned into Pu-239, "breeding" fuel. In the thorium fuel cycle, thorium-232 absorbs a neutron in either a fast or thermal reactor. The resulting thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material. Fueling of nuclear reactors The amount of energy in the reservoir of nuclear fuel is frequently expressed in terms of "full-power days," which is the number of 24-hour periods (days) a reactor is scheduled for operation at full power output for the generation of heat energy. The number of full-power days in a reactor's operating cycle (between refueling outage times) is related to the amount of fissile uranium-235 (U-235) contained in the fuel assemblies at the beginning of the cycle. A higher percentage of U-235 in the core at the beginning of a cycle will permit the reactor to be run for a greater number of full-power days. At the end of the operating cycle, the fuel in some of the assemblies is "spent", having spent four to six years in the reactor producing power. This spent fuel is discharged and replaced with new (fresh) fuel assemblies. Though considered "spent," these fuel assemblies contain a large quantity of fuel. In practice it is economics that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the reactor is unable to maintain 100% full output power, and therefore, income for the utility lowers as plant output power lowers. Most nuclear plants operate at a very low profit margin due to operating overhead, mainly regulatory costs, so operating below 100% power is not economically viable for very long. The fraction of the reactor's fuel core replaced during refueling is typically one-third, but depends on how long the plant operates between refuelings. Plants typically operate on 18-month or 24-month refueling cycles. This means that one refueling, replacing only one-third of the fuel, can keep a nuclear reactor at full power for nearly two years. The disposition and storage of this spent fuel is one of the most challenging aspects of the operation of a commercial nuclear power plant. This nuclear waste is highly radioactive and its toxicity presents a danger for thousands of years.
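The refueling arithmetic described above (full-power days, with roughly one-third of the core replaced every 18 to 24 months) can be sketched with round, assumed plant parameters; none of these numbers refer to a particular station:

```python
# Rough refueling-cycle arithmetic for the scheme described above.
# All inputs are illustrative assumptions, not data for any specific plant.

electrical_power_mw = 1100          # assumed net electrical output
capacity_factor = 0.90              # assumed fraction of the cycle spent at full power
cycle_months = 18                   # assumed refueling interval

cycle_days = cycle_months * 30.4
full_power_days = cycle_days * capacity_factor
electricity_per_cycle_twh = electrical_power_mw * full_power_days * 24 / 1e9

print(f"full-power days per cycle: {full_power_days:.0f}")        # ~492
print(f"electricity per cycle: {electricity_per_cycle_twh:.1f} TWh")  # ~13.0 TWh

# Each refueling replaces roughly one-third of the fuel, so a given assembly
# stays in the core for about three cycles (~4.5 years with these assumptions),
# consistent with the "four to six years" residence time quoted above.
residence_years = 3 * cycle_months / 12
print(f"approximate fuel residence time: {residence_years:.1f} years")
```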
After being discharged from the reactor, spent nuclear fuel is transferred to the on-site spent fuel pool. The spent fuel pool is a large pool of water that provides cooling and shielding of the spent nuclear fuel, as well as limiting radiation exposure to on-site personnel. Once the decay heat and radioactivity have diminished somewhat (after approximately five years), the fuel can be transferred from the fuel pool to dry shielded casks that can be safely stored for thousands of years. After loading into dry shielded casks, the casks are stored on-site in a specially guarded facility in impervious concrete bunkers. On-site fuel storage facilities are designed to withstand the impact of commercial airliners, with little to no damage to the spent fuel. An average on-site fuel storage facility can hold 30 years of spent fuel in a space smaller than a football field. Not all reactors need to be shut down for refueling; for example, pebble bed reactors, RBMK reactors, molten-salt reactors, Magnox, AGR and CANDU reactors allow fuel to be shifted through the reactor while it is running. In a CANDU reactor, this also allows individual fuel elements to be situated within the reactor core that are best suited to the amount of U-235 in the fuel element. The amount of energy extracted from nuclear fuel is called its burnup, which is expressed in terms of the heat energy produced per initial unit of fuel weight. Burnup is commonly expressed as megawatt days thermal per metric ton of initial heavy metal. Nuclear safety Nuclear safety covers the actions taken to prevent nuclear and radiation accidents and incidents or to limit their consequences. The nuclear power industry has improved the safety and performance of reactors, and has proposed new, safer (but generally untested) reactor designs, but there is no guarantee that the reactors will be designed, built and operated correctly. Mistakes do occur, and the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake, despite multiple warnings by the NRG and the Japanese nuclear safety administration. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from MIT has estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period. Nuclear accidents Serious, though rare, nuclear and radiation accidents have occurred. These include the Windscale fire (October 1957), the SL-1 accident (1961), the Three Mile Island accident (1979), the Chernobyl disaster (April 1986), and the Fukushima Daiichi nuclear disaster (March 2011). Nuclear-powered submarine mishaps include the K-19 reactor accident (1961), the K-27 reactor accident (1968), and the K-431 reactor accident (1985). Nuclear reactors have been launched into Earth orbit at least 34 times. A number of incidents have been connected with the unmanned nuclear-reactor-powered Soviet RORSAT radar satellite programme, especially Kosmos 954, which resulted in nuclear fuel reentering the Earth's atmosphere from orbit and being dispersed in northern Canada (January 1978). Natural nuclear reactors Almost two billion years ago a series of self-sustaining nuclear fission "reactors" self-assembled in the area now known as Oklo in Gabon, West Africa.
The conditions at that place and time allowed a natural nuclear fission to occur with circumstances that are similar to the conditions in a constructed nuclear reactor. Fifteen fossil natural fission reactors have so far been found in three separate ore deposits at the Oklo uranium mine in Gabon. First discovered in 1972 by French physicist Francis Perrin, they are collectively known as the Oklo Fossil Reactors. Self-sustaining nuclear fission reactions took place in these reactors approximately 1.5 billion years ago, and ran for a few hundred thousand years, averaging 100 kW of power output during that time. The concept of a natural nuclear reactor was theorized as early as 1956 by Paul Kuroda at the University of Arkansas. Such reactors can no longer form on Earth in its present geologic period. Radioactive decay of formerly more abundant uranium-235 over the time span of hundreds of millions of years has reduced the proportion of this naturally occurring fissile isotope to below the amount required to sustain a chain reaction with only plain water as a moderator. The natural nuclear reactors formed when a uranium-rich mineral deposit became inundated with groundwater that acted as a neutron moderator, and a strong chain reaction took place. The water moderator would boil away as the reaction increased, slowing it back down again and preventing a meltdown. The fission reaction was sustained for hundreds of thousands of years, cycling on the order of hours to a few days. These natural reactors are extensively studied by scientists interested in geologic radioactive waste disposal. They offer a case study of how radioactive isotopes migrate through the Earth's crust. This is a significant area of controversy as opponents of geologic waste disposal fear that isotopes from stored waste could end up in water supplies or be carried into the environment. Emissions Nuclear reactors produce tritium as part of normal operations, which is eventually released into the environment in trace quantities. As an isotope of hydrogen, tritium (T) frequently binds to oxygen and forms T2O. This molecule is chemically identical to H2O and so is both colorless and odorless, however the additional neutrons in the hydrogen nuclei cause the tritium to undergo beta decay with a half-life of 12.3 years. Despite being measurable, the tritium released by nuclear power plants is minimal. The United States NRC estimates that a person drinking water for one year out of a well contaminated by what they would consider to be a significant tritiated water spill would receive a radiation dose of 0.3 millirem. For comparison, this is an order of magnitude less than the 4 millirem a person receives on a round trip flight from Washington, D.C. to Los Angeles, a consequence of less atmospheric protection against highly energetic cosmic rays at high altitudes. The amounts of strontium-90 released from nuclear power plants under normal operations is so low as to be undetectable above natural background radiation. Detectable strontium-90 in ground water and the general environment can be traced to weapons testing that occurred during the mid-20th century (accounting for 99% of the Strontium-90 in the environment) and the Chernobyl accident (accounting for the remaining 1%).
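The 12.3-year half-life quoted above translates directly into how quickly a given tritium inventory disappears; the sketch below simply applies the standard exponential-decay law:

```python
# Exponential decay of tritium (half-life 12.3 years, as noted above).
HALF_LIFE_YEARS = 12.3

def fraction_remaining(years):
    """Fraction of an initial tritium inventory left after a given time."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (12.3, 25, 50, 100):
    print(f"after {t:5.1f} years: {fraction_remaining(t) * 100:5.1f}% remains")
# after  12.3 years:  50.0% remains
# after  25.0 years:  24.4% remains
# after  50.0 years:   6.0% remains
# after 100.0 years:   0.4% remains
```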
22153
https://en.wikipedia.org/wiki/Nuclear%20power
Nuclear power
Nuclear power is the use of nuclear reactions to produce electricity. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium in nuclear power plants. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators in some space probes such as Voyager 2. Reactors producing controlled fusion power have been operated since 1958 but have yet to generate net power and are not expected to be commercially available in the near future. The first nuclear power plant was built in the 1950s. The global installed nuclear capacity grew to 100GW in the late 1970s, and then expanded during the 1980s, reaching 300GW by 1990. The 1979 Three Mile Island accident in the United States and the 1986 Chernobyl disaster in the Soviet Union resulted in increased regulation and public opposition to nuclear power plants. Nuclear power plants supplied 2,602 terawatt hours (TWh) of electricity in 2023, equivalent to about 9% of global electricity generation, and were the second largest low-carbon power source after hydroelectricity. There are 415 civilian fission reactors in the world, with an overall capacity of 374GW, 66 under construction and 87 planned, with a combined capacity of 72GW and 84GW, respectively. The United States has the largest fleet of nuclear reactors, generating almost 800TWh of low-carbon electricity per year with an average capacity factor of 92%. The average global capacity factor is 89%. Most new reactors under construction are generation III reactors in Asia. Nuclear power is a safe, sustainable energy source that reduces carbon emissions. This is because nuclear power generation causes one of the lowest levels of fatalities per unit of energy generated compared to other energy sources. "Economists estimate that each nuclear plant built could save more than 800,000 life years." Coal, petroleum, natural gas and hydroelectricity have each caused more fatalities per unit of energy due to air pollution and accidents. Nuclear power plants also emit no greenhouse gases and result in lower life-cycle carbon emissions than common "renewables". The radiological hazards associated with nuclear power are the primary motivations of the anti-nuclear movement, which contends that nuclear power poses threats to people and the environment, citing the potential for accidents like the Fukushima nuclear disaster in Japan in 2011, and is too expensive to deploy when compared to alternative sustainable energy sources. History Origins The process of nuclear fission was discovered in 1938 after over four decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. Soon after the discovery of the fission process, it was realized that neutrons released by a fissioning nucleus could, under the right conditions, induce fissions in nearby nuclei, thus initiating a self-sustaining chain reaction. Once this was experimentally confirmed in 1939, scientists in many countries petitioned their governments for support for nuclear fission research, just on the cusp of World War II, in order to develop a nuclear weapon. In the United States, these research efforts led to the creation of the first human-made nuclear reactor, the Chicago Pile-1 under the Stagg Field stadium at the University of Chicago, which achieved criticality on December 2, 1942.
The reactor's development was part of the Manhattan Project, the Allied effort to create atomic bombs during World War II. It led to the building of larger single-purpose production reactors for the production of weapons-grade plutonium for use in the first nuclear weapons. The United States tested the first nuclear weapon in July 1945, the Trinity test, and the atomic bombings of Hiroshima and Nagasaki happened one month later. Despite the military nature of the first nuclear devices, there was strong optimism in the 1940s and 1950s that nuclear power could provide cheap and endless energy. Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100kW. In 1953, American President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the Atomic Energy Act of 1954, which allowed rapid declassification of U.S. reactor technology and encouraged development by the private sector. First power generation The first organization to develop practical nuclear power was the U.S. Navy, with the S1W reactor for the purpose of propelling submarines and aircraft carriers. The first nuclear-powered submarine, USS Nautilus, was put to sea in January 1954. The S1W reactor was a pressurized water reactor. This design was chosen because it was simpler, more compact, and easier to operate compared to alternative designs, and thus more suitable to be used in submarines. This decision would result in the PWR being the reactor of choice also for power generation, thus having a lasting impact on the civilian electricity market in the years to come. On June 27, 1954, the Obninsk Nuclear Power Plant in the USSR became the world's first nuclear power plant to generate electricity for a power grid, producing around 5 megawatts of electric power. The world's first commercial nuclear power station, Calder Hall at Windscale, England, was connected to the national power grid on 27 August 1956. In common with a number of other generation I reactors, the plant had the dual purpose of producing electricity and plutonium-239, the latter for the nascent nuclear weapons program in Britain. Expansion and first opposition The total global installed nuclear capacity initially rose relatively quickly, rising from less than 1 gigawatt (GW) in 1960 to 100GW in the late 1970s. During the 1970s and 1980s, rising economic costs (related to extended construction times largely due to regulatory changes and pressure-group litigation) and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s in the U.S. and 1990s in Europe, the flat electric grid growth and electricity liberalization also made the addition of large new baseload energy generators economically unattractive. The 1973 oil crisis had a significant effect on countries such as France and Japan, which had relied more heavily on oil for electric generation, prompting them to invest in nuclear power. France would construct 25 nuclear power plants over the next 15 years, and as of 2019, 71% of French electricity was generated by nuclear power, the highest percentage by any nation in the world. Some local opposition to nuclear power emerged in the United States in the early 1960s. In the late 1960s, some members of the scientific community began to express pointed concerns.
These anti-nuclear concerns related to nuclear accidents, nuclear proliferation, nuclear terrorism and radioactive waste disposal. In the early 1970s, there were large protests about a proposed nuclear power plant in Wyhl, Germany. The project was cancelled in 1975. The anti-nuclear success at Wyhl inspired opposition to nuclear power in other parts of Europe and North America. By the mid-1970s anti-nuclear activism gained a wider appeal and influence, and nuclear power began to become an issue of major public protest. In some countries, the nuclear power conflict "reached an intensity unprecedented in the history of technology controversies". The increased public hostility to nuclear power led to a longer license procurement process, more regulations and increased requirements for safety equipment, which made new construction much more expensive. In the United States, over 120 Light Water Reactor proposals were ultimately cancelled and the construction of new reactors ground to a halt. The 1979 accident at Three Mile Island with no fatalities, played a major part in the reduction in the number of new plant constructions in many countries. Chernobyl and renaissance During the 1980s one new nuclear reactor started up every 17 days on average. By the end of the decade, global installed nuclear capacity reached 300GW. Since the late 1980s, new capacity additions slowed significantly, with the installed nuclear capacity reaching 366GW in 2005. The 1986 Chernobyl disaster in the USSR, involving an RBMK reactor, altered the development of nuclear power and led to a greater focus on meeting international safety and regulatory standards. It is considered the worst nuclear disaster in history both in total casualties, with 56 direct deaths, and financially, with the cleanup and the cost estimated at 18billionRbls (US$68billion in 2019, adjusted for inflation). The international organization to promote safety awareness and the professional development of operators in nuclear facilities, the World Association of Nuclear Operators (WANO), was created as a direct outcome of the 1986 Chernobyl accident. The Chernobyl disaster played a major part in the reduction in the number of new plant constructions in the following years. Influenced by these events, Italy voted against nuclear power in a 1987 referendum, becoming the first country to completely phase out nuclear power in 1990. In the early 2000s, nuclear energy was expecting a nuclear renaissance, an increase in the construction of new reactors, due to concerns about carbon dioxide emissions. During this period, newer generation III reactors, such as the EPR began construction. Fukushima accident Prospects of a nuclear renaissance were delayed by another nuclear accident. The 2011 Fukushima Daiichi nuclear accident was caused by the Tōhoku earthquake and tsunami, one of the largest earthquakes ever recorded. The Fukushima Daiichi Nuclear Power Plant suffered three core meltdowns due to failure of the emergency cooling system for lack of electricity supply. This resulted in the most serious nuclear accident since the Chernobyl disaster. The accident prompted a re-examination of nuclear safety and nuclear energy policy in many countries. Germany approved plans to close all its reactors by 2022, and many other countries reviewed their nuclear power programs. 
Following the disaster, Japan shut down all of its nuclear power reactors, some of them permanently, and in 2015 began a gradual process to restart the remaining 40 reactors, following safety checks and based on revised criteria for operations and public approval. In 2022, the Japanese government, under the leadership of Prime Minister Fumio Kishida, declared that 10 more nuclear power plants, idled since the 2011 disaster, were to be reopened. Kishida is also pushing for research and construction of new, safer nuclear plants to safeguard Japanese consumers from the fluctuating price of the fossil fuel market and reduce Japan's greenhouse gas emissions. Kishida intends to have Japan become a significant exporter of nuclear energy and technology to developing countries around the world. Current prospects By 2015, the IAEA's outlook for nuclear energy had become more promising, recognizing the importance of low-carbon generation for mitigating climate change. At that time, the global trend was for new nuclear power stations coming online to be balanced by the number of old plants being retired. In 2016, the U.S. Energy Information Administration projected for its "base case" that world nuclear power generation would increase from 2,344 terawatt hours (TWh) in 2012 to 4,500TWh in 2040. Most of the predicted increase was expected to be in Asia. As of 2018, there were over 150 nuclear reactors planned including 50 under construction. In January 2019, China had 45 reactors in operation, 13 under construction, and planned to build 43 more, which would make it the world's largest generator of nuclear electricity. As of 2021, 17 reactors were reported to be under construction. China built significantly fewer reactors than originally planned. Its share of electricity from nuclear power was 5% in 2019 and observers have cautioned that, along with the risks, the changing economics of energy generation may cause new nuclear energy plants to "no longer make sense in a world that is leaning toward cheaper, more reliable renewable energy". In October 2021, the Japanese cabinet approved the new Plan for Electricity Generation to 2030 prepared by the Agency for Natural Resources and Energy (ANRE) and an advisory committee, following public consultation. The nuclear target for 2030 requires the restart of another ten reactors. Prime Minister Fumio Kishida in July 2022 announced that the country should consider building advanced reactors and extending operating licences beyond 60 years. As of 2022, with world oil and gas prices on the rise, while Germany was restarting its coal plants to deal with the loss of Russian gas that it needs to supplement its energy transition, many other countries have announced ambitious plans to reinvigorate ageing nuclear generating capacity with new investments. French President Emmanuel Macron announced his intention to build six new reactors in coming decades, placing nuclear at the heart of France's drive for carbon neutrality by 2050. Meanwhile, in the United States, the Department of Energy, in collaboration with commercial entities TerraPower and X-energy, is planning on building two different advanced nuclear reactors by 2027, with further plans for nuclear implementation in its long-term green energy and energy security goals. Power plants Nuclear power plants are thermal power stations that generate electricity by harnessing the thermal energy released from nuclear fission.
A fission nuclear power plant is generally composed of: a nuclear reactor, in which the nuclear reactions generating heat take place; a cooling system, which removes the heat from inside the reactor; a steam turbine, which transforms the heat into mechanical energy; an electric generator, which transforms the mechanical energy into electrical energy. When a neutron hits the nucleus of a uranium-235 or plutonium atom, it can split the nucleus into two smaller nuclei, which is a nuclear fission reaction. The reaction releases energy and neutrons. The released neutrons can hit other uranium or plutonium nuclei, causing new fission reactions, which release more energy and more neutrons. This is called a chain reaction. In most commercial reactors, the reaction rate is controlled by control rods that absorb excess neutrons. The controllability of nuclear reactors depends on the fact that a small fraction of neutrons resulting from fission are delayed. The time delay between the fission and the release of the neutrons slows changes in reaction rates and gives time for moving the control rods to adjust the reaction rate. Fuel cycle The life cycle of nuclear fuel starts with uranium mining. The uranium ore is then converted into a compact ore concentrate form, known as yellowcake (U3O8), to facilitate transport. Fission reactors generally need uranium-235, a fissile isotope of uranium. The concentration of uranium-235 in natural uranium is low (about 0.7%). Some reactors can use this natural uranium as fuel, depending on their neutron economy. These reactors generally have graphite or heavy water moderators. For light water reactors, the most common type of reactor, this concentration is too low, and it must be increased by a process called uranium enrichment. In civilian light water reactors, uranium is typically enriched to 3.5–5% uranium-235. The uranium is then generally converted into uranium oxide (UO2), a ceramic, which is then compressively sintered into fuel pellets, a stack of which forms fuel rods of the proper composition and geometry for the particular reactor. After some time in the reactor, the fuel will have reduced fissile material and increased fission products, until its use becomes impractical. At this point, the spent fuel will be moved to a spent fuel pool, which provides cooling for the decay heat and shielding from ionizing radiation. After several months or years, the spent fuel is radioactively and thermally cool enough to be moved to dry storage casks or reprocessed. Uranium resources Uranium is a fairly common element in the Earth's crust: it is approximately as common as tin or germanium, and is about 40 times more common than silver. Uranium is present in trace concentrations in most rocks, dirt, and ocean water, but is generally economically extracted only where it is present in relatively high concentrations. Uranium mining can be underground, open-pit, or in-situ leach mining. An increasing number of the highest output mines are remote underground operations, such as McArthur River uranium mine, in Canada, which by itself accounts for 13% of global production. As of 2011 the world's known resources of uranium, economically recoverable at the arbitrary price ceiling of US$130/kg, were enough to last for between 70 and 100 years. In 2007, the OECD estimated 670 years of economically recoverable uranium in total conventional resources and phosphate ores assuming the then-current use rate.
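The enrichment step described above is often summarized with a simple feed/product/tails mass balance. In the sketch below, the natural-uranium assay is the 0.7% figure quoted above, while the tails assay is an assumed illustrative value:

```python
# Mass balance for uranium enrichment: natural-uranium feed is split into
# enriched product and depleted tails. Assays are weight fractions of U-235;
# the tails assay (0.25%) is an assumed illustrative figure.

def feed_per_kg_product(product_assay, feed_assay=0.0071, tails_assay=0.0025):
    """Kilograms of natural-uranium feed needed per kilogram of enriched product."""
    return (product_assay - tails_assay) / (feed_assay - tails_assay)

for product in (0.035, 0.045, 0.05):
    f = feed_per_kg_product(product)
    print(f"{product * 100:.1f}% product needs ~{f:.1f} kg natural U per kg")
# 3.5% product needs ~7.1 kg natural U per kg
# 4.5% product needs ~9.2 kg natural U per kg
# 5.0% product needs ~10.3 kg natural U per kg
```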
Light water reactors make relatively inefficient use of nuclear fuel, mostly using only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable, and newer reactors also achieve a more efficient use of the available resources than older ones. With a pure fast reactor fuel cycle with a burn up of all the uranium and actinides (which presently make up the most hazardous substances in nuclear waste), there is an estimated 160,000 years worth of uranium in total conventional resources and phosphate ore at the price of 60–100 US$/kg. However, reprocessing is expensive, possibly dangerous and can be used to manufacture nuclear weapons. One analysis found that uranium prices could increase by two orders of magnitude between 2035 and 2100 and that there could be a shortage near the end of the century. A 2017 study by researchers from MIT and WHOI found that "at the current consumption rate, global conventional reserves of terrestrial uranium (approximately 7.6 million tonnes) could be depleted in a little over a century". Limited uranium-235 supply may inhibit substantial expansion with the current nuclear technology. While various ways to reduce dependence on such resources are being explored, new nuclear technologies are considered to not be available in time for climate change mitigation purposes or competition with alternatives of renewables in addition to being more expensive and require costly research and development. A study found it to be uncertain whether identified resources will be developed quickly enough to provide uninterrupted fuel supply to expanded nuclear facilities and various forms of mining may be challenged by ecological barriers, costs, and land requirements. Researchers also report considerable import dependence of nuclear energy. Unconventional uranium resources also exist. Uranium is naturally present in seawater at a concentration of about 3 micrograms per liter, with 4.4 billion tons of uranium considered present in seawater at any time. In 2014 it was suggested that it would be economically competitive to produce nuclear fuel from seawater if the process was implemented at large scale. Like fossil fuels, over geological timescales, uranium extracted on an industrial scale from seawater would be replenished by both river erosion of rocks and the natural process of uranium dissolved from the surface area of the ocean floor, both of which maintain the solubility equilibria of seawater concentration at a stable level. Some commentators have argued that this strengthens the case for nuclear power to be considered a renewable energy. Waste The normal operation of nuclear power plants and facilities produce radioactive waste, or nuclear waste. This type of waste is also produced during plant decommissioning. There are two broad categories of nuclear waste: low-level waste and high-level waste. The first has low radioactivity and includes contaminated items such as clothing, which poses limited threat. High-level waste is mainly the spent fuel from nuclear reactors, which is very radioactive and must be cooled and then safely disposed of or reprocessed. High-level waste The most important waste stream from nuclear power reactors is spent nuclear fuel, which is considered high-level waste. For Light Water Reactors (LWRs), spent fuel is typically composed of 95% uranium, 4% fission products, and about 1% transuranic actinides (mostly plutonium, neptunium and americium). 
The fission products are responsible for the bulk of the short-term radioactivity, whereas the plutonium and other transuranics are responsible for the bulk of the long-term radioactivity. High-level waste (HLW) must be stored isolated from the biosphere with sufficient shielding so as to limit radiation exposure. After being removed from the reactors, used fuel bundles are stored for six to ten years in spent fuel pools, which provide cooling and shielding against radiation. After that, the fuel is cool enough that it can be safely transferred to dry cask storage. The radioactivity decreases exponentially with time, such that it will have decreased by 99.5% after 100 years. The more intensely radioactive short-lived fission products (SLFPs) decay into stable elements in approximately 300 years, and after about 100,000 years, the spent fuel becomes less radioactive than natural uranium ore. Commonly suggested methods to isolate LLFP waste from the biosphere include separation and transmutation, synroc treatments, or deep geological storage. Thermal-neutron reactors, which presently constitute the majority of the world fleet, cannot burn up the reactor grade plutonium that is generated during the reactor operation. This limits the life of nuclear fuel to a few years. In some countries, such as the United States, spent fuel is classified in its entirety as a nuclear waste. In other countries, such as France, it is largely reprocessed to produce a partially recycled fuel, known as mixed oxide fuel or MOX. For spent fuel that does not undergo reprocessing, the most concerning isotopes are the medium-lived transuranic elements, which are led by reactor-grade plutonium (half-life 24,000 years). Some proposed reactor designs, such as the integral fast reactor and molten salt reactors, can use as fuel the plutonium and other actinides in spent fuel from light water reactors, thanks to their fast fission spectrum. This offers a potentially more attractive alternative to deep geological disposal. The thorium fuel cycle results in similar fission products, though creates a much smaller proportion of transuranic elements from neutron capture events within a reactor. Spent thorium fuel, although more difficult to handle than spent uranium fuel, may present somewhat lower proliferation risks. Low-level waste The nuclear industry also produces a large volume of low-level waste, with low radioactivity, in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. Low-level waste can be stored on-site until radiation levels are low enough to be disposed of as ordinary waste, or it can be sent to a low-level waste disposal site. Waste relative to other types In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants, in particular, produce large amounts of toxic and mildly radioactive ash resulting from the concentration of naturally occurring radioactive materials in coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent from radiation from coal plants is 100 times that from the operation of nuclear plants. 
Although coal ash is much less radioactive than spent nuclear fuel by weight, coal ash is produced in much higher quantities per unit of energy generated. It is also released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials. Nuclear waste volume is small compared to the energy produced. For example, at Yankee Rowe Nuclear Power Station, which generated 44 billion kilowatt hours of electricity when in service, its complete spent fuel inventory is contained within sixteen casks. It is estimated that to produce a lifetime supply of energy for a person at a western standard of living (approximately 3GWh) would require on the order of the volume of a soda can of low enriched uranium, resulting in a similar volume of spent fuel generated. Waste disposal Following interim storage in a spent fuel pool, the bundles of used fuel rod assemblies of a typical nuclear power station are often stored on site in dry cask storage vessels. Presently, waste is mainly stored at individual reactor sites and there are over 430 locations around the world where radioactive material continues to accumulate. Disposal of nuclear waste is often considered the most politically divisive aspect in the lifecycle of a nuclear power facility. The lack of movement of nuclear waste in the 2 billion year old natural nuclear fission reactors in Oklo, Gabon is cited as "a source of essential information today." Experts suggest that centralized underground repositories which are well-managed, guarded, and monitored, would be a vast improvement. There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories". With the advent of new technologies, other methods including horizontal drillhole disposal into geologically inactive areas have been proposed. There are no commercial scale purpose built underground high-level waste repositories in operation. However, in Finland the Onkalo spent nuclear fuel repository of the Olkiluoto Nuclear Power Plant was under construction as of 2015. Reprocessing Most thermal-neutron reactors run on a once-through nuclear fuel cycle, mainly due to the low price of fresh uranium. However, many reactors are also fueled with recycled fissionable materials that remain in spent nuclear fuel. The most common fissionable material that is recycled is the reactor-grade plutonium (RGPu) that is extracted from spent fuel. It is mixed with uranium oxide and fabricated into mixed-oxide or MOX fuel. Because thermal LWRs remain the most common reactor worldwide, this type of recycling is the most common. It is considered to increase the sustainability of the nuclear fuel cycle, reduce the attractiveness of spent fuel to theft, and lower the volume of high level nuclear waste. Spent MOX fuel cannot generally be recycled for use in thermal-neutron reactors. This issue does not affect fast-neutron reactors, which are therefore preferred in order to achieve the full energy potential of the original uranium. The main constituent of spent fuel from LWRs is slightly enriched uranium. This can be recycled into reprocessed uranium (RepU), which can be used in a fast reactor, used directly as fuel in CANDU reactors, or re-enriched for another cycle through an LWR. Re-enriching of reprocessed uranium is common in France and Russia. Reprocessed uranium is also safer in terms of nuclear proliferation potential. 
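To make the "small waste volume per unit of energy" comparison above more concrete, typical burnup figures can be converted into delivered electricity per kilogram of fuel; both the burnup and the thermal efficiency below are assumed round numbers, not values quoted in this article:

```python
# Rough energy content of LWR fuel at an assumed discharge burnup.
burnup_gwd_per_tonne = 45        # assumed thermal energy per tonne of heavy metal
thermal_efficiency = 0.33        # assumed fraction converted to electricity

thermal_mwh_per_kg = burnup_gwd_per_tonne * 24          # 1 GWd/t = 24 MWh(th)/kg
electric_mwh_per_kg = thermal_mwh_per_kg * thermal_efficiency

print(f"~{electric_mwh_per_kg:.0f} MWh of electricity per kg of fuel")   # ~356 MWh/kg

# With these assumptions, the ~3 GWh lifetime consumption figure cited above
# corresponds to spent fuel on the order of
lifetime_gwh = 3
kg_of_fuel = lifetime_gwh * 1000 / electric_mwh_per_kg
print(f"~{kg_of_fuel:.0f} kg of spent fuel per lifetime")                # ~8 kg
```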
Reprocessing has the potential to recover up to 95% of the uranium and plutonium fuel in spent nuclear fuel, as well as reduce long-term radioactivity within the remaining waste. However, reprocessing has been politically controversial because of the potential for nuclear proliferation and varied perceptions of increasing the vulnerability to nuclear terrorism. Reprocessing also leads to higher fuel cost compared to the once-through fuel cycle. While reprocessing reduces the volume of high-level waste, it does not reduce the fission products that are the primary causes of residual heat generation and radioactivity for the first few centuries outside the reactor. Thus, reprocessed waste still requires an almost identical treatment for the initial first few hundred years. Reprocessing of civilian fuel from power reactors is currently done in France, the United Kingdom, Russia, Japan, and India. In the United States, spent nuclear fuel is currently not reprocessed. The La Hague reprocessing facility in France has operated commercially since 1976 and is responsible for half the world's reprocessing as of 2010. It produces MOX fuel from spent fuel derived from several countries. More than 32,000 tonnes of spent fuel had been reprocessed as of 2015, with the majority from France, 17% from Germany, and 9% from Japan. Breeding Breeding is the process of converting non-fissile material into fissile material that can be used as nuclear fuel. The non-fissile material that can be used for this process is called fertile material, and constitute the vast majority of current nuclear waste. This breeding process occurs naturally in breeder reactors. As opposed to light water thermal-neutron reactors, which use uranium-235 (0.7% of all natural uranium), fast-neutron breeder reactors use uranium-238 (99.3% of all natural uranium) or thorium. A number of fuel cycles and breeder reactor combinations are considered to be sustainable or renewable sources of energy. In 2006 it was estimated that with seawater extraction, there was likely five billion years' worth of uranium resources for use in breeder reactors. Breeder technology has been used in several reactors, but as of 2006, the high cost of reprocessing fuel safely requires uranium prices of more than US$200/kg before becoming justified economically. Breeder reactors are however being developed for their potential to burn all of the actinides (the most active and dangerous components) in the present inventory of nuclear waste, while also producing power and creating additional quantities of fuel for more reactors via the breeding process. As of 2017, there are two breeders producing commercial power, BN-600 reactor and the BN-800 reactor, both in Russia. The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation. Both China and India are building breeder reactors. The Indian 500 MWe Prototype Fast Breeder Reactor is in the commissioning phase, with plans to build more. Another alternative to fast-neutron breeders are thermal-neutron breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics. India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium. 
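The resource arithmetic behind breeding follows directly from the isotopic shares quoted above: once-through thermal reactors rely mainly on the 0.7% of natural uranium that is U-235, while breeders can in principle convert and fission the 99.3% that is U-238. A deliberately simplified upper-bound sketch:

```python
# Upper-bound sketch of how breeding stretches uranium resources.
# Ignores conversion losses and the modest in-situ plutonium breeding that
# already happens in thermal reactors, so this is a rough ceiling, not a forecast.

u235_fraction = 0.007        # share of natural uranium that is fissile U-235
u238_fraction = 0.993        # fertile U-238, usable via breeding to Pu-239

# Once-through cycles exploit (roughly) only the U-235; a full breeder cycle
# could eventually fission nearly all of the mined heavy metal.
multiplier = (u235_fraction + u238_fraction) / u235_fraction
print(f"resource-utilization multiplier: ~{multiplier:.0f}x")   # ~143x

# Applied to the ~70-100 year resource estimates quoted earlier for current
# reactors, even this crude ceiling shows why breeder fuel cycles are discussed
# in terms of many millennia of supply.
```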
Decommissioning Nuclear decommissioning is the process of dismantling a nuclear facility to the point that it no longer requires measures for radiation protection, returning the facility and its parts to a safe enough level to be entrusted for other uses. Due to the presence of radioactive materials, nuclear decommissioning presents technical and economic challenges. The costs of decommissioning are generally spread over the lifetime of a facility and saved in a decommissioning fund. Production Civilian nuclear power supplied 2,602 terawatt hours (TWh) of electricity in 2023, equivalent to about 9% of global electricity generation, and was the second largest low-carbon power source after hydroelectricity. Nuclear power's contribution to global energy production was about 4% in 2023. This is a little more than wind power, which provided 3.5% of global energy in 2023. Nuclear power's share of global electricity production has fallen from 16.5% in 1997, in large part because the economics of nuclear power have become more difficult. There are 415 civilian fission reactors in the world, with a combined electrical capacity of 374 gigawatts (GW). There are also 66 nuclear power reactors under construction and 87 reactors planned, with a combined capacity of 72GW and 84GW, respectively. The United States has the largest fleet of nuclear reactors, generating over 800TWh per year with an average capacity factor of 92%. Most reactors under construction are generation III reactors in Asia. Regional differences in the use of nuclear power are large. The United States produces the most nuclear energy in the world, with nuclear power providing 19% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors, at 65% in 2023. In the European Union, nuclear power provides 22% of the electricity as of 2022. Nuclear power is the single largest low-carbon electricity source in the United States, and accounts for about half of the European Union's low-carbon electricity. Nuclear energy policy differs among European Union countries, and some, such as Austria, Estonia, Ireland and Italy, have no active nuclear power stations. In addition, there were approximately 140 naval vessels using nuclear propulsion in operation, powered by about 180 reactors. These include military and some civilian ships, such as nuclear-powered icebreakers. International research is continuing into additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems. Economics The economics of new nuclear power plants is a controversial subject, and multi-billion-dollar investments depend on the choice of energy sources. Nuclear power plants typically have high capital costs for building the plant. For this reason, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants. Fuel costs account for about 30 percent of the operating costs, while prices are subject to the market. The high cost of construction is one of the biggest challenges for nuclear power plants. A new 1,100MW plant is estimated to cost between US$6 billion and US$9 billion. Nuclear power cost trends show large disparity by nation, design, build rate and the establishment of familiarity in expertise. The only two nations for which data is available that saw cost decreases in the 2000s were India and South Korea.
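The construction-cost figures just quoted can be normalized per kilowatt of capacity and per unit of annual generation, which is how such plants are usually compared; the capacity factor below is the fleet average cited earlier, and the rest follows from the numbers in this section:

```python
# Normalizing the quoted cost of a new ~1,100 MW plant (US$6-9 billion)
# and its annual output at a ~92% capacity factor.

capacity_mw = 1100
cost_low, cost_high = 6e9, 9e9           # US$, as quoted above
capacity_factor = 0.92                    # average quoted for the US fleet

cost_per_kw_low = cost_low / (capacity_mw * 1000)
cost_per_kw_high = cost_high / (capacity_mw * 1000)
annual_twh = capacity_mw * 8766 * capacity_factor / 1e6

print(f"capital cost: ~${cost_per_kw_low:,.0f}-{cost_per_kw_high:,.0f} per kW")
print(f"annual generation: ~{annual_twh:.1f} TWh")
# ~$5,455-8,182 per kW and ~8.9 TWh per year; spreading billions of dollars of
# up-front capital over decades of output is why financing assumptions dominate
# the cost comparisons discussed below.
```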
Analysis of the economics of nuclear power must also take into account who bears the risks of future uncertainties. As of 2010, all operating nuclear power plants have been developed by state-owned or regulated electric utility monopolies. Many countries have since liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants. The levelized cost of electricity (LCOE) from a new nuclear power plant is estimated to be 69USD/MWh, according to an analysis by the International Energy Agency and the OECD Nuclear Energy Agency. This represents the median cost estimate for an nth-of-a-kind nuclear power plant to be completed in 2025, at a discount rate of 7%. Nuclear power was found to be the least-cost option among dispatchable technologies. Variable renewables can generate cheaper electricity: the median cost of onshore wind power was estimated to be 50USD/MWh, and utility-scale solar power 56USD/MWh. At the assumed CO2 emission cost of 30USD/ton, power from coal (88USD/MWh) and gas (71USD/MWh) is more expensive than low-carbon technologies. Electricity from long-term operation of nuclear power plants by lifetime extension was found to be the least-cost option, at 32USD/MWh. Measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power. Extreme weather events, including events made more severe by climate change, are decreasing all energy source reliability including nuclear energy by a small degree, depending on location siting. New small modular reactors, such as those developed by NuScale Power, are aimed at reducing the investment costs for new construction by making the reactors smaller and modular, so that they can be built in a factory. Certain designs had considerable early positive economics, such as the CANDU, which realized a much higher capacity factor and reliability when compared to generation II light water reactors up to the 1990s. Nuclear power plants, though capable of some grid-load following, are typically run as much as possible to keep the cost of the generated electrical energy as low as possible, supplying mostly base-load electricity. Due to the on-line refueling reactor design, PHWRs (of which the CANDU design is a part) continue to hold many world record positions for longest continual electricity generation, often over 800 days. The specific record as of 2019 is held by a PHWR at Kaiga Atomic Power Station, generating electricity continuously for 962 days. Costs not considered in LCOE calculations include funds for research and development, and disasters (the Fukushima disaster is estimated to cost taxpayers ≈$187 billion). In some cases, Governments were found to force "consumers to pay upfront for potential cost overruns" or subsidize uneconomic nuclear energy or be required to do so. Nuclear operators are liable to pay for the waste management in the European Union. In the U.S., the Congress reportedly decided 40 years ago that the nation, and not private companies, would be responsible for storing radioactive waste with taxpayers paying for the costs. 
The World Nuclear Waste Report 2019 found that "even in countries in which the polluter-pays-principle is a legal requirement, it is applied incompletely" and notes the case of the German Asse II deep geological disposal facility, where the retrieval of large amounts of waste has to be paid for by taxpayers. Similarly, other forms of energy, including fossil fuels and renewables, have a portion of their costs covered by governments. Use in space The most common use of nuclear power in space is the use of radioisotope thermoelectric generators, which use radioactive decay to generate power. These power generators are relatively small scale (few kW), and they are mostly used to power space missions and experiments for long periods where solar power is not available in sufficient quantity, such as in the Voyager 2 space probe. A few space vehicles have been launched using nuclear reactors: 34 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A. Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. Safety Nuclear power plants have three unique characteristics that affect their safety, as compared to other power plants. Firstly, intensely radioactive materials are present in a nuclear reactor. Their release to the environment could be hazardous. Secondly, the fission products, which make up most of the intensely radioactive substances in the reactor, continue to generate a significant amount of decay heat even after the fission chain reaction has stopped. If the heat cannot be removed from the reactor, the fuel rods may overheat and release radioactive materials. Thirdly, a criticality accident (a rapid increase of the reactor power) is possible in certain reactor designs if the chain reaction cannot be controlled. These three characteristics have to be taken into account when designing nuclear reactors. All modern reactors are designed so that an uncontrolled increase of the reactor power is prevented by natural feedback mechanisms, a concept known as negative void coefficient of reactivity. If the temperature or the amount of steam in the reactor increases, the fission rate inherently decreases. The chain reaction can also be manually stopped by inserting control rods into the reactor core. Emergency core cooling systems (ECCS) can remove the decay heat from the reactor if normal cooling systems fail. If the ECCS fails, multiple physical barriers limit the release of radioactive materials to the environment even in the case of an accident. The last physical barrier is the large containment building. With a death rate of 0.03 per TWh, nuclear power is the second safest energy source per unit of energy generated, after solar power, in terms of mortality when the historical track-record is considered. Energy produced by coal, petroleum, natural gas and hydropower has caused more deaths per unit of energy generated due to air pollution and energy accidents. This is found when comparing the immediate deaths from other energy sources to both the immediate and the latent, or predicted, indirect cancer deaths from nuclear energy accidents. 
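To give a sense of scale, the sketch below converts the per-TWh mortality rate quoted above into absolute annual figures. The 2,602 TWh value is the 2023 nuclear output cited in the Production section; the coal rate used for comparison is a hypothetical placeholder rather than a figure from this article.

```python
# Scale a mortality rate (deaths per TWh) into absolute annual deaths.
# The nuclear rate of 0.03/TWh is from the text above; the coal rate below is
# a hypothetical placeholder for comparison, not a figure from this article.
def annual_deaths(rate_per_twh, generation_twh):
    return rate_per_twh * generation_twh

nuclear_generation_twh_2023 = 2_602       # 2023 output from the Production section
print(annual_deaths(0.03, nuclear_generation_twh_2023))    # ~78 deaths/year equivalent

hypothetical_coal_rate = 25.0             # placeholder deaths/TWh, illustration only
print(annual_deaths(hypothetical_coal_rate, 1_000))        # 25,000 for 1,000 TWh of coal
```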
When the direct and indirect fatalities (including fatalities resulting from mining and air pollution) from nuclear power and fossil fuels are compared, the use of nuclear power has been calculated to have prevented about 1.84 million deaths from air pollution between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels. Following the 2011 Fukushima nuclear disaster, it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life. Serious impacts of nuclear accidents are often not directly attributable to radiation exposure, but rather to social and psychological effects. Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients. Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and suicide. A comprehensive 2005 study on the aftermath of the Chernobyl disaster concluded that the mental health impact is the largest public health problem caused by the accident. Frank N. von Hippel, an American scientist, commented that a disproportionate fear of ionizing radiation (radiophobia) could have long-term psychological effects on the population of contaminated areas following the Fukushima disaster. Accidents Some serious nuclear and radiation accidents have occurred. The severity of nuclear accidents is generally classified using the International Nuclear Event Scale (INES) introduced by the International Atomic Energy Agency (IAEA). The scale ranks anomalous events or accidents on a scale from 0 (a deviation from normal operation that poses no safety risk) to 7 (a major accident with widespread effects). There have been three accidents of level 5 or higher in the civilian nuclear power industry, two of which, the Chernobyl accident and the Fukushima accident, are ranked at level 7. The first major nuclear accidents were the Kyshtym disaster in the Soviet Union and the Windscale fire in the United Kingdom, both in 1957. The first major accident at a nuclear reactor in the USA occurred in 1961 at the SL-1, a U.S. Army experimental nuclear power reactor at the Idaho National Laboratory. An uncontrolled chain reaction resulted in a steam explosion which killed the three crew members and caused a meltdown. Another serious accident happened in 1968, when one of the two liquid-metal-cooled reactors on board the Soviet submarine K-27 underwent a fuel element failure, with the emission of gaseous fission products into the surrounding air, resulting in 9 crew fatalities and 83 injuries. The Fukushima Daiichi nuclear accident was caused by the 2011 Tohoku earthquake and tsunami. The accident has not caused any radiation-related deaths but resulted in radioactive contamination of surrounding areas. The difficult cleanup operation is expected to cost tens of billions of dollars over 40 or more years. The Three Mile Island accident in 1979 was a smaller-scale accident, rated at INES level 5. There were no direct or indirect deaths caused by the accident. The impact of nuclear accidents is controversial. According to Benjamin K. Sovacool, fission energy accidents ranked first among energy sources in terms of their total economic cost, accounting for 41% of all property damage attributed to energy accidents.
Another analysis found that coal, oil, liquid petroleum gas and hydroelectric accidents (primarily due to the Banqiao Dam disaster) have resulted in greater economic impacts than nuclear power accidents. The study compares latent cancer deaths attributable to nuclear power with immediate deaths from other energy sources per unit of energy generated, and does not include fossil-fuel-related cancers and other indirect deaths caused by fossil fuel use in its "severe accident" (an accident with more than five fatalities) classification. The Chernobyl accident in 1986 caused approximately 50 deaths from direct and indirect effects, and some temporary serious injuries from acute radiation syndrome. Future mortality from increased cancer rates is predicted at an estimated 4,000 deaths in the decades to come. However, the costs have been large and are increasing. Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with national and international conventions. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity. This cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a study by the Congressional Budget Office in the United States. These beyond-regular insurance costs for worst-case scenarios are not unique to nuclear power. Hydroelectric power plants are similarly not fully insured against a catastrophic event such as dam failure. For example, the failure of the Banqiao Dam caused the death of an estimated 30,000 to 200,000 people, and 11 million people lost their homes. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state. Attacks and sabotage Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities. In the United States, the Nuclear Regulatory Commission carries out "Force on Force" (FOF) exercises at all nuclear power plant sites at least once every three years. In the United States, plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. Insider sabotage is also a threat because insiders can observe and work around security measures. Successful insider crimes depended on the perpetrators' observation and knowledge of security vulnerabilities. A fire caused 5–10 million dollars' worth of damage to New York's Indian Point Energy Center in 1971. The arsonist was a plant maintenance worker. Proliferation Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-related nuclear technology to states that do not already possess nuclear weapons. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can also be used to make nuclear weapons. For this reason, nuclear power presents proliferation risks. A nuclear power program can become a route leading to a nuclear weapon.
An example of this is the concern over Iran's nuclear program. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-Proliferation Treaty, to which 190 countries adhere. As of April 2012, there are thirty-one countries that have civil nuclear power plants, of which nine have nuclear weapons. The vast majority of these nuclear weapons states produced weapons before building commercial nuclear power stations. A fundamental goal for global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power. The Global Nuclear Energy Partnership was an international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for agreeing to forgo their own indigenous development of a uranium enrichment program. The France-based Eurodif/European Gaseous Diffusion Uranium Enrichment Consortium is a program that successfully implemented this concept, with Spain and other countries without enrichment facilities buying a share of the fuel produced at the French-controlled enrichment facility, but without a transfer of technology. Iran was an early participant from 1974 and remains a shareholder of Eurodif via Sofidif. A 2009 United Nations report said that: the revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation as these technologies can produce fissile materials that are directly usable in nuclear weapons. On the other hand, power reactors can also reduce nuclear weapon arsenals when military-grade nuclear materials are reprocessed to be used as fuel in nuclear power plants. The Megatons to Megawatts Program is considered the single most successful non-proliferation program to date. Up to 2005, the program had processed $8 billion of highly enriched, weapons-grade uranium into low-enriched uranium suitable as nuclear fuel for commercial fission reactors by diluting it with natural uranium. This corresponds to the elimination of 10,000 nuclear weapons. For approximately two decades, this material generated nearly 10 percent of all the electricity consumed in the United States, or about half of all U.S. nuclear electricity, with a total of around 7,000 TWh of electricity produced. In total it is estimated to have cost $17 billion, a "bargain for US ratepayers", with Russia profiting $12 billion from the deal. This was much-needed revenue for the Russian nuclear industry, which, after the collapse of the Soviet economy, had difficulty paying for the maintenance and security of the Russian Federation's highly enriched uranium and warheads. The Megatons to Megawatts Program was hailed as a major success by anti-nuclear weapon advocates as it has largely been the driving force behind the sharp reduction in the number of nuclear weapons worldwide since the Cold War ended. However, without an increase in nuclear reactors and greater demand for fissile fuel, the cost of dismantling and down-blending has dissuaded Russia from continuing its disarmament. As of 2013, Russia appears not to be interested in extending the program. Environmental impact Being a low-carbon energy source with relatively small land-use requirements, nuclear energy can have a positive environmental impact.
It also requires a constant supply of significant amounts of water and affects the environment through mining and milling. Its largest potential negative environmental impacts arise from transgenerational risks: the risk that nuclear weapons proliferation increases the chance of their future use, risks associated with the long-term management of radioactive waste such as groundwater contamination, the risk of accidents, and the risk of various forms of attacks on waste storage sites, reprocessing plants or power plants. However, these remain largely potential risks, as historically there have been few disasters at nuclear power plants with substantial known environmental impacts. Carbon emissions Nuclear power is one of the leading low-carbon methods of producing electricity, and in terms of total life-cycle greenhouse gas emissions per unit of energy generated, has emission values comparable to or lower than renewable energy. A 2014 analysis of the carbon footprint literature by the Intergovernmental Panel on Climate Change (IPCC) reported that the embodied total life-cycle emission intensity of nuclear power has a median value of 12 g CO2-eq/kWh, which is the lowest among all commercial baseload energy sources. This is contrasted with coal and natural gas at 820 and 490 g CO2-eq/kWh. As of 2021, nuclear reactors worldwide have helped avoid the emission of 72 billion tonnes of carbon dioxide since 1970, compared to coal-fired electricity generation, according to a report. Radiation The average dose from natural background radiation is 2.4 millisieverts per year (mSv/a) globally. It varies between 1 mSv/a and 13 mSv/a, depending mostly on the geology of the location. According to the United Nations (UNSCEAR), regular nuclear power plant operations, including the nuclear fuel cycle, increase this amount by 0.0002 mSv/a of public exposure as a global average. The average dose from operating nuclear power plants to the local populations around them is less than 0.0001 mSv/a. For comparison, the average dose to those living near a coal power plant is over three times this dose, at 0.0003 mSv/a. Chernobyl resulted in the most affected surrounding populations and male recovery personnel receiving an average initial dose of 50 to 100 mSv over a few hours to weeks, while the remaining global legacy of the worst nuclear power plant accident is an average exposure of 0.002 mSv/a, continuously dropping as the contamination decays, from an initial high of 0.04 mSv per person averaged over the entire population of the Northern Hemisphere in 1986, the year of the accident. Debate The nuclear power debate concerns the controversy which has surrounded the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes. Proponents of nuclear energy regard it as a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on other energy sources that are also often dependent on imports. For example, proponents note that nuclear-generated electricity avoids 470 million metric tons of carbon dioxide emissions annually that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities or repurposed/recycled for other energy uses. M.
King Hubbert, who popularized the concept of peak oil, saw oil as a resource that would run out and considered nuclear energy its replacement. Proponents also claim that the present quantity of nuclear waste is small and can be reduced through the latest technology of newer reactors and that the operational safety record of fission-electricity in terms of deaths is so far "unparalleled". Kharecha and Hansen estimated that "global nuclear power has prevented an average of 1.84 million air pollution-related deaths and 64 gigatonnes of CO2-equivalent (Gt-eq) greenhouse gas (GHG) emissions that would have resulted from fossil fuel burning" and, if continued, it could prevent up to 7 million deaths and 240 Gt-eq emissions by 2050. Proponents also bring to attention the opportunity cost of using other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year, as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example of impact provided by proponents is the 650,000-ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant. Opponents believe that nuclear power poses many threats to people's health and the environment, such as the risk of nuclear weapons proliferation, long-term safe waste management and terrorism in the future. They also contend that nuclear power plants are complex systems where many things can and have gone wrong. Costs of the Chernobyl disaster amount to ≈$68 billion as of 2019 and are increasing; the Fukushima disaster is estimated to cost taxpayers ~$187 billion, and radioactive waste management is estimated to cost the European Union nuclear operators ~$250 billion by 2050. However, in countries that already use nuclear energy, when not considering reprocessing, intermediate nuclear waste disposal costs could be relatively fixed to certain but unknown degrees "as the main part of these costs stems from the operation of the intermediate storage facility". Critics find that one of the largest drawbacks to building new nuclear fission power plants is their large construction and operating costs compared to alternative sustainable energy sources. Further costs include ongoing research and development, expensive reprocessing where it is practiced, and decommissioning. Proponents note that focusing on the levelized cost of energy (LCOE), however, ignores the value premium associated with 24/7 dispatchable electricity and the cost of storage and backup systems necessary to integrate variable energy sources into a reliable electrical grid. "Nuclear thus remains the dispatchable low-carbon technology with the lowest expected costs in 2025. Only large hydro reservoirs can provide a similar contribution at comparable costs but remain highly dependent on the natural endowments of individual countries." Overall, many opponents find that nuclear energy cannot meaningfully contribute to climate change mitigation. In general, they find it too dangerous and too expensive, taking too long to deploy, and an obstacle to achieving a transition towards sustainability and carbon neutrality, effectively a distracting competition for resources (i.e. human, financial, time, infrastructure and expertise) for the deployment and development of alternative, sustainable energy system technologies (such as wind, ocean and solar – including e.g.
floating solar – as well as ways to manage their intermittency other than nuclear baseload generation, such as dispatchable generation, renewables diversification, super grids, flexible energy demand, supply-regulating smart grids and energy storage technologies). Nevertheless, there is ongoing research and debate over the costs of new nuclear, especially in regions where i.a. seasonal energy storage is difficult to provide and which aim to phase out fossil fuels in favor of low-carbon power faster than the global average. Some find that the financial transition costs of a 100% renewables-based European energy system that has completely phased out nuclear energy could be higher by 2050 based on current technologies (i.e. not considering potential advances in e.g. green hydrogen, transmission and flexibility capacities, ways to reduce energy needs, geothermal energy and fusion energy) when the grid only extends across Europe. Arguments of economics and safety are used by both sides of the debate. Comparison with renewable energy Slowing global warming requires a transition to a low-carbon economy, mainly by burning far less fossil fuel. Limiting global warming to 1.5°C is technically possible if no new fossil fuel power plants are built from 2019. This has generated considerable interest and dispute in determining the best path forward to rapidly replace fossil-based fuels in the global energy mix, with intense academic debate. The IEA has at times said that countries without nuclear power should develop it alongside their renewable power. Several studies suggest that it might be theoretically possible to cover a majority of world energy generation with new renewable sources. The Intergovernmental Panel on Climate Change (IPCC) has said that if governments were supportive, renewable energy supply could account for close to 80% of the world's energy use by 2050. While in developed nations the economically feasible geography for new hydropower is lacking, with every geographically suitable area largely already exploited, some proponents of wind and solar energy claim these resources alone could eliminate the need for nuclear power. Nuclear power is comparable to, and in some cases lower than, many renewable energy sources in terms of lives lost in the past per unit of electricity delivered. Depending on the recycling of renewable energy technologies, nuclear reactors may produce a much smaller volume of waste, although much more toxic, expensive to manage and longer-lived. A nuclear plant also needs to be disassembled and removed, and much of the disassembled plant needs to be stored as low-level nuclear waste for a few decades. The disposal and management of the wide variety of radioactive waste, of which there are over one quarter of a million tons as of 2018, can cause damage and costs across the world for hundreds of thousands of years – possibly over a million years, due to issues such as leakage, malign retrieval, vulnerability to attacks (including of reprocessing and power plants), groundwater contamination, radiation and leakage to above ground, brine leakage or bacterial corrosion. The European Commission Joint Research Centre found that as of 2021 the necessary technologies for geological disposal of nuclear waste are now available and can be deployed. Corrosion experts noted in 2020 that putting the problem of storage off any longer "isn't good for anyone".
Separated plutonium and enriched uranium could be used for nuclear weapons, which – even with the current centralized (e.g. state-level) control and level of prevalence – are considered a difficult and substantial global risk, with potentially severe future impacts on human health, lives, civilization and the environment. Speed of transition and investment needed Analysis in 2015 by Professor Barry W. Brook and colleagues found that nuclear energy could displace or remove fossil fuels from the electric grid completely within 10 years. This finding was based on the historically modest and proven rate at which nuclear energy was added in France and Sweden during their building programs in the 1980s. In a similar analysis, Brook had earlier determined that 50% of all global energy, including transportation synthetic fuels etc., could be generated within approximately 30 years if the global nuclear fission build rate was identical to historical proven installation rates calculated in GW per year per unit of global GDP (GW/year/$). This is in contrast to the conceptual studies for 100% renewable energy systems, which would require global investment per year that is an order of magnitude more costly and has no historical precedent. These renewable scenarios would also need far greater land devoted to onshore wind and onshore solar projects. Brook notes that the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives." Scientific data indicates that – assuming 2021 emissions levels – humanity has a carbon budget equivalent to only 11 years of emissions left to limit warming to 1.5°C, while the construction of new nuclear reactors took a median of 7.2–10.9 years in 2018–2020. This is substantially longer than scaling up the deployment of wind and solar alongside other measures, and new builds – especially of novel reactor types – are also riskier, more often delayed and more dependent on state support. Researchers have cautioned that novel nuclear technologies – which have been in development for decades, are less tested, have higher proliferation risks, have more new safety problems, are often far from commercialization and are more expensive – may not be available in time. Critics of nuclear energy often oppose only nuclear fission energy but not nuclear fusion; however, fusion energy is unlikely to be commercially widespread before 2050. Land use The median land area used by US nuclear power stations per 1 GW installed capacity is . To generate the same amount of electricity annually (taking into account capacity factors) from solar PV would require about , and from a wind farm about . Not included in this is land required for the associated transmission lines, water supply, rail lines, mining and processing of nuclear fuel, and for waste disposal. Research Advanced fission reactor designs Current fission reactors in operation around the world are second or third generation systems, with most of the first-generation systems having already been retired. Research into advanced generation IV reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals, including improved economics, safety, proliferation resistance and natural resource use, and the ability to consume existing nuclear waste in the production of electricity.
Most of these reactors differ significantly from current operating light water reactors, and are expected to be available for commercial construction after 2030. Hybrid fusion-fission Hybrid nuclear power is a proposed means of generating power by the use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it has the potential to be capable of extracting all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude, and more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns. Fusion Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult and have yet to be created on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s. Nuclear fusion research is underway but fusion energy is not likely to be commercially widespread before 2050. Several experimental nuclear fusion reactors and facilities exist. The largest and most ambitious international nuclear fusion project currently in progress is ITER, a large tokamak under construction in France. ITER is planned to pave the way for commercial fusion power by demonstrating self-sustained nuclear fusion reactions with positive energy gain. Construction of the ITER facility began in 2007, but the project has run into many delays and budget overruns. The facility is now not expected to begin operations until the year 2027 – 11 years after initially anticipated. A follow on commercial nuclear fusion power station, DEMO, has been proposed. There are also suggestions for a power plant based upon a different fusion approach, that of an inertial fusion power plant. Fusion-powered electricity generation was initially believed to be readily achievable, as fission-electric power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2020, more than 80 years after the first attempts, commercialization of fusion power production was thought to be unlikely before 2050. To enhance and accelerate the development of fusion energy, the United States Department of Energy (DOE) granted $46 million to eight firms, including Commonwealth Fusion Systems and Tokamak Energy Inc, in 2023. This ambitious initiative aims to introduce pilot-scale fusion within a decade.
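ITER's goal of demonstrating "positive energy gain" is conventionally expressed through the fusion gain factor Q, the ratio of fusion power produced to the external heating power supplied to the plasma. The 500 MW and 50 MW figures in the sketch below are the commonly cited ITER design targets, included here as background rather than drawn from this article.

```python
# Fusion gain factor: Q = fusion power / external plasma heating power.
# Q > 1 means net energy gain in the plasma; the 500 MW / 50 MW values are
# the commonly quoted ITER design targets (background, not from this article).
def fusion_gain(p_fusion_mw, p_heating_mw):
    return p_fusion_mw / p_heating_mw

print(fusion_gain(500, 50))   # ITER design target: Q = 10
```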
Nuclear proliferation
Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-applicable nuclear technology and information to nations not recognized as "Nuclear Weapon States" by the Treaty on the Non-Proliferation of Nuclear Weapons, commonly known as the Non-Proliferation Treaty or NPT. Proliferation has been opposed by many nations with and without nuclear weapons, as governments fear that more countries with nuclear weapons will increase the possibility of nuclear warfare (up to and including the so-called countervalue targeting of civilians with nuclear weapons), de-stabilize international or regional relations, or infringe upon the national sovereignty of nation states. Four countries besides the five recognized Nuclear Weapon States have acquired, or are presumed to have acquired, nuclear weapons: India, Pakistan, North Korea, and Israel. None of these four are a party to the NPT, although North Korea acceded to the NPT in 1985, then withdrew in 2003 and conducted its first nuclear test in 2006. One critique of the NPT is that the treaty is discriminatory in the sense that only those countries that tested nuclear weapons before 1968 are recognized as nuclear weapon states while all other states are treated as non-nuclear-weapon states who can only join the treaty if they forswear nuclear weapons. Research into the development of nuclear weapons was initially undertaken during World War II by the United States (in cooperation with the United Kingdom and Canada), Germany, Japan, and the USSR. The United States was the first and is the only country to have used a nuclear weapon in war, when it used two bombs against Japan in August 1945. After surrendering to end the war, Germany and Japan ceased to be involved in any nuclear weapon research. In August 1949, the USSR tested a nuclear weapon, becoming the second country to detonate a nuclear bomb. The United Kingdom first tested a nuclear weapon in October 1952. France first tested a nuclear weapon in 1960. The People's Republic of China detonated a nuclear weapon in 1964. India conducted its first nuclear test in 1974, which prompted Pakistan to develop its own nuclear program and, when India conducted a second series of nuclear tests in 1998, Pakistan followed with a series of tests of its own. In 2006, North Korea conducted its first nuclear test. Non-proliferation Efforts Early efforts to prevent nuclear proliferation involved intense government secrecy, the wartime acquisition of known uranium stores (the Combined Development Trust), and at times even outright sabotage—such as the bombing of a heavy-water facility in Norway thought to be used for a German nuclear program. These efforts began immediately after the discovery of nuclear fission and its military potential. None of these efforts were explicitly public, because the weapon developments themselves were kept secret until the bombing of Hiroshima. Earnest international efforts to promote nuclear non-proliferation began soon after World War II, when the Truman Administration proposed the Baruch Plan of 1946, named after Bernard Baruch, America's first representative to the United Nations Atomic Energy Commission (UNAEC). The Baruch Plan, which drew heavily from the Acheson–Lilienthal Report of 1946, proposed the verifiable dismantlement and destruction of the U.S. 
nuclear arsenal (which, at that time, was the only nuclear arsenal in the world) after all governments had cooperated successfully to accomplish two things: (1) the establishment of an "international atomic development authority," which would actually own and control all military-applicable nuclear materials and activities, and (2) the creation of a system of automatic sanctions, which not even the U.N. Security Council could veto, and which would proportionately punish states attempting to acquire the capability to make nuclear weapons or fissile material. Baruch's plea for the destruction of nuclear weapons invoked basic moral and religious intuitions. In one part of his address to the UN, Baruch said, "Behind the black portent of the new atomic age lies a hope which, seized upon with faith, can work out our salvation. If we fail, then we have damned every man to be the slave of Fear. Let us not deceive ourselves. We must elect World Peace or World Destruction.... We must answer the world's longing for peace and security." With this remark, Baruch helped launch the field of nuclear ethics, to which many policy experts and scholars have contributed. Although the Baruch Plan enjoyed wide international support, it failed to emerge from the UNAEC because the Soviet Union planned to veto it in the Security Council. Still, it remained official American policy until 1953, when President Eisenhower made his "Atoms for Peace" proposal before the U.N. General Assembly. Eisenhower's proposal led eventually to the creation of the International Atomic Energy Agency (IAEA) in 1957. Under the "Atoms for Peace" program thousands of scientists from around the world were educated in nuclear science and then dispatched home, where many later pursued secret weapons programs in their home country. Efforts to conclude an international agreement to limit the spread of nuclear weapons did not begin until the early 1960s, after four nations (the United States, the Soviet Union, the United Kingdom and France) had acquired nuclear weapons (see List of states with nuclear weapons for more information). Although these efforts stalled in the early 1960s, they renewed once again in 1964, after China detonated a nuclear weapon. In 1968, governments represented at the Eighteen Nation Disarmament Committee (ENDC) finished negotiations on the text of the NPT. In June 1968, the U.N. General Assembly endorsed the NPT with General Assembly Resolution 2373 (XXII), and in July 1968, the NPT opened for signature in Washington, D.C., London and Moscow. The NPT entered into force in March 1970. Since the mid-1970s, the primary focus of non-proliferation efforts has been to maintain, and even increase, international control over the fissile material and specialized technologies necessary to build such devices because these are the most difficult and expensive parts of a nuclear weapons program. The main materials whose generation and distribution are controlled are highly enriched uranium and plutonium. Other than the acquisition of these special materials, the scientific and technical means for weapons construction to develop rudimentary, but working, nuclear explosive devices are considered to be within the reach of industrialized nations. 
Since its founding by the United Nations in 1957, the International Atomic Energy Agency (IAEA) has promoted two, sometimes contradictory, missions: on the one hand, the Agency seeks to promote and spread internationally the use of civilian nuclear energy; on the other hand, it seeks to prevent, or at least detect, the diversion of civilian nuclear energy to nuclear weapons, nuclear explosive devices or purposes unknown. The IAEA now operates a safeguards system as specified under Article III of the Nuclear Non-Proliferation Treaty (NPT) of 1968, which aims to ensure that civil stocks of uranium and plutonium, as well as facilities and technologies associated with these nuclear materials, are used only for peaceful purposes and do not contribute in any way to proliferation or nuclear weapons programs. It is often argued that the proliferation of nuclear weapons to many other states has been prevented by the extension of assurances and mutual defence treaties to these states by nuclear powers, but other factors, such as national prestige, or specific historical experiences, also play a part in hastening or stopping nuclear proliferation. Dual-Use Technology Dual-use technology refers to the possibility of military use of civilian nuclear power technology. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that several stages of the nuclear fuel cycle allow diversion of nuclear materials for nuclear weapons. When this happens a nuclear power program can become a route leading to the atomic bomb or a public annex to a secret bomb program. The crisis over Iran's nuclear activities is a case in point. Many UN and US agencies warn that building more nuclear reactors unavoidably increases nuclear proliferation risks. A fundamental goal for American and global security is to minimize the proliferation risks associated with the expansion of nuclear power. If this development is "poorly managed or efforts to contain risks are unsuccessful, the nuclear future will be dangerous". For nuclear power programs to be developed and managed safely and securely, it is important that countries have domestic “good governance” characteristics that will encourage proper nuclear operations and management: These characteristics include low degrees of corruption (to avoid officials selling materials and technology for their own personal gain as occurred with the A.Q. Khan smuggling network in Pakistan), high degrees of political stability (defined by the World Bank as "likelihood that the government will be destabilized or overthrown by unconstitutional or violent means, including motivated violence and terrorism"), high governmental effectiveness scores (a World Bank aggregate measure of "the quality of the civil service and the degree of its independence from political pressures [and] the quality of policy formulation and implementation"), and a strong degree of regulatory competence. International Cooperation Treaty on the Non-Proliferation of Nuclear Weapons At present, 189 countries are States Parties to the Treaty on the Nonproliferation of Nuclear Weapons, more commonly known as the Nuclear Non-Proliferation Treaty or NPT. These include the five Nuclear Weapons States (NWS) recognized by the NPT: the People's Republic of China, France, Russian Federation, the UK, and the United States. 
Notable non-signatories to the NPT are Israel, Pakistan, and India (the latter two have since tested nuclear weapons, while Israel is considered by most to be an unacknowledged nuclear weapons state). North Korea was once a signatory but withdrew in January 2003. The legality of North Korea's withdrawal is debatable but as of 9 October 2006, North Korea clearly possesses the capability to make a nuclear explosive device. International Atomic Energy Agency The IAEA was established on 29 July 1957 to help nations develop nuclear energy for peaceful purposes. Allied to this role is the administration of safeguards arrangements to provide assurance to the international community that individual countries are honoring their commitments under the treaty. Though established under its own international treaty, the IAEA reports to both the United Nations General Assembly and the Security Council. The IAEA regularly inspects civil nuclear facilities to verify the accuracy of documentation supplied to it. The agency checks inventories, and samples and analyzes materials. Safeguards are designed to deter a diversion of nuclear material by increasing the risk of early detection. They are complemented by controls on the export of sensitive technology from countries such as the UK and the United States through voluntary bodies such as the Nuclear Suppliers Group. The main concern of the IAEA is that uranium not be enriched beyond what is necessary for commercial civil plants, and that plutonium which is produced by nuclear reactors not be refined into a form that would be suitable for bomb production. Scope of safeguards Traditional safeguards are arrangements to account for and control the use of nuclear materials. This verification is a key element in the international system which ensures that uranium in particular is used only for peaceful purposes. Parties to the NPT agree to accept technical safeguard measures applied by the IAEA. These require that operators of nuclear facilities maintain and declare detailed accounting records of all movements and transactions involving nuclear material. Over 550 facilities and several hundred other locations are subject to regular inspection, and their records and nuclear material are audited. Inspections by the IAEA are complemented by other measures such as surveillance cameras and instrumentation. The inspections act as an alert system providing a warning of the possible diversion of nuclear material from peaceful activities. The system relies on three elements: Material Accountancy – tracking all inward and outward transfers and the flow of materials in any nuclear facility. This includes sampling and analysis of nuclear material, on-site inspections, and review and verification of operating records. Physical Security – restricting access to nuclear materials at the site. Containment and Surveillance – use of seals, automatic cameras and other instruments to detect unreported movement or tampering with nuclear materials, as well as spot checks on-site. All NPT non-weapons states must accept these full-scope safeguards. In the five weapons states plus the non-NPT states (India, Pakistan and Israel), facility-specific safeguards apply. IAEA inspectors regularly visit these facilities to verify completeness and accuracy of records. The terms of the NPT cannot be enforced by the IAEA itself, nor can nations be forced to sign the treaty. In reality, as shown in Iraq and North Korea, safeguards can be backed up by diplomatic, political and economic measures.
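The material accountancy element described above ultimately rests on a simple balance check: over an accounting period, "material unaccounted for" (MUF) is the book inventory (opening inventory plus receipts minus shipments) minus the measured ending inventory, and a significant non-zero MUF prompts further investigation. MUF and this balance are standard safeguards practice, but the sketch below uses invented quantities purely for illustration.

```python
# Material balance check used in nuclear material accountancy.
# MUF (material unaccounted for) = (opening inventory + receipts - shipments)
#                                  - measured ending inventory
# All quantities below are hypothetical, in kilograms of uranium.
def material_unaccounted_for(opening_kg, receipts_kg, shipments_kg, ending_kg):
    book_inventory = opening_kg + receipts_kg - shipments_kg
    return book_inventory - ending_kg

muf = material_unaccounted_for(opening_kg=1_200.0, receipts_kg=300.0,
                               shipments_kg=280.0, ending_kg=1_218.5)
print(f"MUF = {muf:.1f} kg")   # a significant non-zero MUF triggers further investigation
```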
While traditional safeguards easily verified the correctness of formal declarations by suspect states, in the 1990s attention turned to what might not have been declared. While accepting safeguards at declared facilities, Iraq had set up elaborate equipment elsewhere in an attempt to enrich uranium to weapons-grade. North Korea attempted to use research reactors (not commercial electricity-generating reactors) and a nuclear reprocessing plant to produce some weapons-grade plutonium. The weakness of the NPT regime lay in the fact that no obvious diversion of material was involved. The uranium used as fuel probably came from indigenous sources, and the nuclear facilities were built by the countries themselves without being declared or placed under safeguards. Iraq, as an NPT party, was obliged to declare all facilities but did not do so. Nevertheless, the activities were detected and brought under control using international diplomacy. In Iraq, a military defeat assisted this process. In North Korea, the activities concerned took place before the conclusion of its NPT safeguards agreement. With North Korea, the promised provision of commercial power reactors appeared to resolve the situation for a time, but it later withdrew from the NPT and declared it had nuclear weapons. Additional Protocol In 1993 a program was initiated to strengthen and extend the classical safeguards system, and a model protocol was agreed by the IAEA Board of Governors in 1997. The measures boosted the IAEA's ability to detect undeclared nuclear activities, including those with no connection to the civil fuel cycle. Innovations were of two kinds. Some could be implemented on the basis of the IAEA's existing legal authority through safeguards agreements and inspections. Others required further legal authority to be conferred through an Additional Protocol. This must be agreed by each non-weapons state with the IAEA, as a supplement to any existing comprehensive safeguards agreement. Weapons states have agreed to accept the principles of the model additional protocol. Key elements of the model Additional Protocol: The IAEA is to be given considerably more information on nuclear and nuclear-related activities, including R & D, production of uranium and thorium (regardless of whether it is traded), and nuclear-related imports and exports. IAEA inspectors will have greater rights of access. This includes any suspect location, can occur at short notice (e.g., two hours), and allows the IAEA to deploy environmental sampling and remote monitoring techniques to detect illicit activities. States must streamline administrative procedures so that IAEA inspectors get automatic visa renewal and can communicate more readily with IAEA headquarters. Further evolution of safeguards is towards evaluation of each state, taking account of its particular situation and the kind of nuclear materials it has. This will involve greater judgement on the part of the IAEA and the development of effective methodologies which reassure NPT States. As of 3 July 2015, 146 countries have signed the Additional Protocols and 126 have brought them into force. The IAEA is also applying the measures of the Additional Protocol in Taiwan. Under the Joint Comprehensive Plan of Action, Iran has agreed to implement its protocol provisionally.
Among the leading countries that have not signed the Additional Protocol are Egypt, which says it will not sign until Israel accepts comprehensive IAEA safeguards, and Brazil, which opposes making the protocol a requirement for international cooperation on enrichment and reprocessing, but has not ruled out signing. Limitations of safeguards The greatest risk from nuclear weapons proliferation comes from countries that have not joined the NPT and which have significant unsafeguarded nuclear activities; India, Pakistan, and Israel fall within this category. While safeguards apply to some of their activities, others remain beyond scrutiny. A further concern is that countries may develop various sensitive nuclear fuel cycle facilities and research reactors under full safeguards and then subsequently opt out of the NPT. Bilateral agreements, such as those insisted upon by Australia and Canada for the sale of uranium, address this by including fallback provisions, but many countries are outside the scope of these agreements. If a nuclear-capable country does leave the NPT, it is likely to be reported by the IAEA to the United Nations Security Council, just as if it were in breach of its safeguards agreement. Trade sanctions would then be likely. IAEA safeguards can help ensure that uranium supplied as nuclear fuel and other nuclear supplies do not contribute to nuclear weapons proliferation. In fact, the worldwide application of those safeguards and the substantial world trade in uranium for nuclear electricity make the proliferation of nuclear weapons much less likely. The Additional Protocol, once it is widely in force, will provide credible assurance that there are no undeclared nuclear materials or activities in the states concerned. This will be a major step forward in preventing nuclear proliferation. Other developments The Nuclear Suppliers Group communicated its guidelines, essentially a set of export rules, to the IAEA in 1978. These were to ensure that transfers of nuclear material or equipment would not be diverted to unsafeguarded nuclear fuel cycle or nuclear explosive activities, and formal government assurances to this effect were required from recipients. The Guidelines also recognised the need for physical protection measures in the transfer of sensitive facilities, technology and weapons-usable materials, and strengthened retransfer provisions. The group began with seven members—the United States, the former USSR, the United Kingdom, France, Germany, Canada and Japan—but now includes 46 countries including all five nuclear weapons states. The International Framework for Nuclear Energy Cooperation is an international project involving 25 partner countries, 28 observer and candidate partner countries, and the International Atomic Energy Agency, the Generation IV International Forum, and the European Commission. Its goal is to "[..] provide competitive, commercially-based services as an alternative to a state's development of costly, proliferation-sensitive facilities, and address other issues associated with the safe and secure management of used fuel and radioactive waste." According to Kenneth D. Bergeron's Tritium on Ice: The Dangerous New Alliance of Nuclear Weapons and Nuclear Power, tritium is not classified as a "special nuclear material" but rather as a by-product. It is seen as an important litmus test of the seriousness of the United States' intention to disarm its nuclear arsenal.
This radioactive, super-heavy hydrogen isotope is used to boost the efficiency of fissile materials in nuclear weapons. The United States resumed tritium production in 2003 for the first time in 15 years. This could indicate preparations to replenish the nuclear arms stockpile, since the isotope naturally decays. In May 1995, NPT parties reaffirmed their commitment to a Fissile Materials Cut-off Treaty to prohibit the production of any further fissile material for weapons. This aims to complement the Comprehensive Nuclear-Test-Ban Treaty of 1996 (not entered into force as of June 2020) and to codify commitments made by the United States, the UK, France and Russia to cease production of weapons material, as well as putting a similar ban on China. This treaty will also put more pressure on Israel, India and Pakistan to agree to international verification. On 9 August 2005, Ayatollah Ali Khamenei issued a fatwa forbidding the production, stockpiling and use of nuclear weapons. Khamenei's official statement was made at the meeting of the International Atomic Energy Agency (IAEA) in Vienna. In February 2006, Iran formally announced that uranium enrichment within its borders had continued. Iran claims it is for peaceful purposes but the United Kingdom, France, Germany, and the United States claim the purpose is for nuclear weapon research and construction. Unsanctioned Nuclear Activity NPT Non-Signatories India, Pakistan and Israel have been "threshold" countries in terms of the international non-proliferation regime. They possess or are quickly capable of assembling one or more nuclear weapons. They have remained outside the 1970 NPT. They are thus largely excluded from trade in nuclear plants or materials, except for safety-related devices for a few safeguarded facilities. In May 1998 India and Pakistan each exploded several nuclear devices underground. This heightened concerns regarding an arms race between them, with Pakistan involving the People's Republic of China, an acknowledged nuclear weapons state. Both countries are opposed to the NPT as it stands, and India has consistently attacked the Treaty since its inception in 1970, labeling it a lopsided treaty in favor of the nuclear powers. Relations between the two countries are tense and hostile, and the risks of nuclear conflict between them have long been considered quite high. Kashmir is a prime cause of bilateral tension, its sovereignty being in dispute since 1948. There is a persistent low-level bilateral military conflict due to Pakistan's alleged backing of insurgency in India, the infiltration of Pakistani state-backed militants into the Indian state of Jammu and Kashmir, and the disputed status of Kashmir. Both engaged in a conventional arms race in the 1980s, including sophisticated technology and equipment capable of delivering nuclear weapons. In the 1990s the arms race quickened. In 1994 India reversed a four-year trend of reduced allocations for defence, and despite its much smaller economy, Pakistan was expected to push its own expenditures yet higher. Both have lost their patrons: India, the former USSR, and Pakistan, the United States. But it is the growth and modernization of China's nuclear arsenal and its assistance with Pakistan's nuclear power programme and, reportedly, with missile technology, which exacerbate Indian concerns. In particular, as viewed by Indian strategists, Pakistan is aided by China's People's Liberation Army.
India Nuclear power for civil use is well established in India. Its civil nuclear strategy has been directed towards complete independence in the nuclear fuel cycle, necessary because of its outspoken rejection of the NPT. Due to India's economic and technological isolation after its 1974 nuclear test, India has largely focused on developing and perfecting fast breeder technology through intensive materials and fuel cycle research at the dedicated fast reactor research center, the Indira Gandhi Center for Atomic Research (IGCAR) at Kalpakkam, in the southern part of the country. At the moment, India has a small fast breeder reactor and is planning a much larger one (Prototype Fast Breeder Reactor). This self-sufficiency extends from uranium exploration and mining through fuel fabrication, heavy water production, reactor design and construction, to reprocessing and waste management. It is also developing technology to utilise its abundant resources of thorium as a nuclear fuel. India has 14 small nuclear power reactors in commercial operation, two larger ones under construction, and ten more planned. The 14 operating ones (2548 MWe total) comprise: two 150 MWe BWRs from the United States, which started up in 1969, now use locally enriched uranium and are under safeguards; two small Canadian PHWRs (1972 & 1980), also under safeguards; and ten local PHWRs based on Canadian designs, two of 150 MWe and eight of 200 MWe. There are also two new 540 MWe and two 700 MWe plants at Tarapur (known as TAPP: Tarapur Atomic Power Station). The two under construction and two of the planned ones are 450 MWe versions of these 200 MWe domestic products. Construction has been seriously delayed by financial and technical problems. In 2001 a final agreement was signed with Russia for the country's first large nuclear power plant, comprising two VVER-1000 reactors, under a Russian-financed US$3 billion contract. The first unit is due to be commissioned in 2007. A further two Russian units are under consideration for the site. Nuclear power supplied 3.1% of India's electricity in 2000. Its weapons material appears to come from a Canadian-designed 40 MW "research" reactor which started up in 1960, well before the NPT, and a 100 MW indigenous unit in operation since 1985. Both use local uranium, as India does not import any nuclear fuel. It is estimated that India may have built up enough weapons-grade plutonium for a hundred nuclear warheads. It is widely believed that the nuclear programs of India and Pakistan used Canadian CANDU reactors to produce fissionable materials for their weapons; however, this is not accurate. Both Canada (by supplying the 40 MW research reactor, dubbed CIRUS for Canada-India Reactor, United States) and the United States (by supplying 21 tons of heavy water) supplied India with the technology necessary to create a nuclear weapons program. Canada sold India the reactor on the condition that the reactor and any by-products would be "employed for peaceful purposes only." Similarly, the United States sold India heavy water for use in the reactor "only... in connection with research into and the use of atomic energy for peaceful purposes". India, in violation of these agreements, used the Canadian-supplied reactor and American-supplied heavy water to produce plutonium for its first nuclear explosion, Smiling Buddha. The Indian government controversially justified this, however, by claiming that Smiling Buddha was a "peaceful nuclear explosion."
The country has at least three other research reactors, including a tiny one that is exploring the use of thorium as a nuclear fuel by breeding fissile U-233. In addition, an advanced heavy-water thorium cycle is under development. India exploded a nuclear device in 1974, the so-called Smiling Buddha test, which it has consistently claimed was for peaceful purposes. Others saw it as a response to China's nuclear weapons capability. It was then universally perceived, notwithstanding official denials, to possess, or to be able to quickly assemble, nuclear weapons. In 1999 it deployed its own medium-range missile and has developed an intermediate-range missile capable of reaching targets in China's industrial heartland. In 1995 the United States quietly intervened to head off a proposed nuclear test. However, in 1998 there were five more tests in Operation Shakti. These were unambiguously military, including one claimed to be of a sophisticated thermonuclear device, and their declared purpose was "to help in the design of nuclear weapons of different yields and different delivery systems". Indian security policies are driven by its determination to be recognized as a dominant power in the region; its increasing concern with China's expanding nuclear weapons and missile delivery programmes; and its concern with Pakistan's capability to deliver nuclear weapons deep into India. It perceives nuclear weapons as a cost-effective political counter to China's nuclear and conventional weaponry, and the effect of its nuclear weapons policy in provoking Pakistan is, by some accounts, considered incidental. India has had an unhappy relationship with China. After an uneasy ceasefire ended the 1962 war, relations between the two nations were frozen until 1998. Since then a degree of high-level contact has been established and a few elementary confidence-building measures put in place. China still occupies some territory that it captured during the aforementioned war and that India claims, and India still occupies some territory claimed by China. China's nuclear weapon and missile support for Pakistan is a major bone of contention. American President George W. Bush met with Indian Prime Minister Manmohan Singh to discuss India's involvement with nuclear weapons. The two countries agreed that the United States would give nuclear power assistance to India. Pakistan Over the years, nuclear power infrastructure has become well established in Pakistan. It is dedicated to the industrial and economic development of the country. Its current nuclear policy aims to promote the socio-economic development of its people as a "foremost priority" and to meet energy, economic, and industrial needs from nuclear sources. There were three operational commercial nuclear power plants, while three larger ones were under construction. The nuclear power plants supplied 787 megawatts (MW), roughly 3.6% of the country's electricity, and the country has projected production of 8800 MW by 2030. Infrastructure established by the IAEA and the U.S. in the 1950s–1960s was based on peaceful research and development and the economic prosperity of the country. Although civil-sector nuclear power was established in the 1950s, the country has an active nuclear weapons program which was started in the 1970s. The bomb program has its roots in the Bangladesh Liberation War of 1971, in which East Pakistan gained independence as the new nation of Bangladesh after India's successful intervention led to a decisive victory over Pakistan. 
This large-scale but clandestine atomic bomb project was directed towards the indigenous development of reactor technology and military-grade plutonium. In 1974, when India surprised the world with the successful detonation of its own bomb, codenamed Smiling Buddha, it became "imperative for Pakistan" to pursue weapons research. According to a leading scientist in the program, it became clear that once India detonated their bomb, "Newton's Third Law" came into "operation"; from then on it was a classic case of "action and reaction". Earlier efforts were directed towards mastering plutonium technology from France, but that route was slowed when the plan failed after U.S. intervention led to cancellation of the project. Contrary to popular perception, Pakistan did not forgo the "plutonium" route; it covertly continued its indigenous research under Munir Ahmad Khan and succeeded with that route in the early 1980s. Reacting to India's first nuclear weapon test, Prime Minister Zulfikar Ali Bhutto and the country's political and military-science circles perceived the test as a final and dangerous threat to Pakistan's "moral and physical existence." With diplomat Aziz Ahmed at his side, Prime Minister Bhutto launched a serious diplomatic offensive and aggressively pressed Pakistan's case at the session of the United Nations Security Council. After 1974, Bhutto's government redoubled its effort, this time equally focused on uranium and plutonium. Pakistan had established science directorates in almost all of its embassies in the important countries of the world, with theoretical physicist S.A. Butt being the director. Abdul Qadeer Khan then established a network through Dubai to smuggle URENCO technology to the Engineering Research Laboratories. Earlier, he had worked with the Physics Dynamics Research Laboratories (FDO), a subsidiary of the Dutch firm VMF-Stork based in Amsterdam. Later, after joining Urenco, he had access to the technology through photographs and documents. Contrary to popular perception, the technology that Khan had brought from Urenco was based on first-generation civil reactor technology and was filled with many serious technical errors, though it was an authentic and vital link for the country's gas centrifuge project. After the British Government stopped the British subsidiary of the American Emerson Electric Co. from shipping components to Pakistan, he described his frustration with a supplier from Germany: "That man from the German team was unethical. When he did not get the order from us, he wrote a letter to a Labour Party member and questions were asked in [British] Parliament." By 1978, his efforts had paid off and made him a national hero. In early 1996, the next Prime Minister of Pakistan, Benazir Bhutto, made it clear that if India conducted a nuclear test, Pakistan could be forced to "follow suit". In 1997, her statement was echoed by Prime Minister Nawaz Sharif, who maintained that "since 1972, [P]akistan had progressed significantly, and we have left that stage (developmental) far behind. Pakistan will not be made a 'hostage' to India by signing the CTBT, before (India)." In May 1998, within weeks of India's nuclear tests, Pakistan announced that it had conducted six underground tests in the Chagai Hills, five on 28 May and one on 30 May. Seismic events consistent with these claims were recorded. 
In 2004, the revelation of Khan's efforts led to the exposure of many defunct European consortiums which had defied export restrictions in the 1970s, and of many defunct Dutch companies that exported thousands of centrifuges to Pakistan as early as 1976. Many centrifuge components were apparently manufactured by the Malaysian Scomi Precision Engineering with the assistance of South Asian and German companies, and used a UAE-based computer company as a false front. The network was widely believed to have had the direct involvement of the Government of Pakistan. This claim could not be verified due to the refusal of that Government to allow the IAEA to interview the alleged head of the nuclear black market, who happened to be none other than Abdul Qadeer Khan. Confessing his crimes a month later on national television, Khan bailed out the Government by taking full responsibility. An independent investigation conducted by the International Institute for Strategic Studies (IISS) confirmed that he had control over the import-export deals, and that his acquisition activities were largely unsupervised by Pakistani governmental authorities. All of his activities went undetected for several years. He duly confessed to running the atomic proliferation ring from Pakistan to Iran and North Korea. He was immediately given presidential immunity. The exact nature of involvement at the governmental level is still unclear, but the manner in which the government acted cast doubt on the sincerity of Pakistan. However, the contents of Abdul Qadeer Khan's personal diaries present his perspective on the matters related to his activities concerning nuclear secrets. He claimed that he acted only at the order or "instigation" of the Pakistani government. Even when there was no official authorization, the Pakistani military knew of Khan's activities, according to the contents of the diaries. On one occasion in 1980, a colonel was aware that Khan was in touch with Syria's Defense Minister Gen. Mustafa Tlass and Gen. Hikmat Shihabi. Six months later, Khan was warned by Zia-ul-Haq to be careful over "nuclear drawings". North Korea The Democratic People's Republic of Korea (better known as North Korea) joined the NPT in 1985 and subsequently signed a safeguards agreement with the IAEA. However, it was believed that North Korea was diverting plutonium extracted from the fuel of its reactor at Yongbyon for use in nuclear weapons. The subsequent confrontation with the IAEA over inspections and suspected violations resulted in North Korea threatening to withdraw from the NPT in 1993. This eventually led to negotiations with the United States resulting in the Agreed Framework of 1994, which provided for IAEA safeguards being applied to its reactors and spent fuel rods. These spent fuel rods were sealed in canisters by the United States to prevent North Korea from extracting plutonium from them. North Korea therefore had to freeze its plutonium programme. During this period, Pakistan-North Korea cooperation in missile technology transfer was being established. A high-level delegation of the Pakistan military visited North Korea in August–September 1992, reportedly to discuss the supply of missile technology to Pakistan. In 1993, Prime Minister Benazir Bhutto traveled repeatedly to China and paid a state visit to North Korea. The visits are believed to be related to Pakistan's subsequent acquisition of the technology to develop its Ghauri missile system. During the period 1992–1994, A.Q. Khan was reported to have visited North Korea thirteen times. 
The missile cooperation program with North Korea was under Dr. A. Q. Khan Research Laboratories. At this time China was under U.S. pressure not to supply the M Dongfeng series of missiles to Pakistan. It is believed by experts that possibly with Chinese connivance and facilitation, the latter was forced to approach North Korea for missile transfers. Reports indicate that North Korea was willing to supply missile sub-systems including rocket motors, inertial guidance systems, control and testing equipment for US$50 million. It is not clear what North Korea got in return. Joseph S. Bermudez Jr. in Jane's Defence Weekly (27 November 2002) reports that Western analysts had begun to question what North Korea received in payment for the missiles; many suspected it was the nuclear technology. The KRL was in charge of both the uranium program and also of the missile program with North Korea. It is therefore likely during this period that cooperation in nuclear technology between Pakistan and North Korea was initiated. Western intelligence agencies began to notice the exchange of personnel, technology and components between KRL and entities of the North Korean 2nd Economic Committee (responsible for weapons production). A New York Times report on 18 October 2002 quoted U.S. intelligence officials having stated that Pakistan was a major supplier of critical equipment to North Korea. The report added that equipment such as gas centrifuges appeared to have been "part of a barter deal" in which North Korea supplied Pakistan with missiles. Separate reports indicate (The Washington Times, 22 November 2002) that U.S. intelligence had as early as 1999 picked up signs that North Korea was continuing to develop nuclear arms. Other reports also indicate that North Korea had been working covertly to develop an enrichment capability for nuclear weapons for at least five years and had used technology obtained from Pakistan (The Washington Times, 18 October 2002). Israel Israel is also thought to possess an arsenal of potentially up to several hundred nuclear warheads based on estimates of the amount of fissile material produced by Israel. This has never been openly confirmed or denied however, due to Israel's policy of deliberate ambiguity. An Israeli nuclear installation is located about ten kilometers to the south of Dimona, the Negev Nuclear Research Center. Its construction commenced in 1958, with French assistance. The official reason given by the Israeli and French governments was to build a nuclear reactor to power a "desalination plant", in order to "green the Negev". The purpose of the Dimona plant is widely assumed to be the manufacturing of nuclear weapons, and the majority of defense experts have concluded that it does in fact do that. However, the Israeli government refuses to confirm or deny this publicly, a policy it refers to as "ambiguity". Norway sold 20 tonnes of heavy water needed for the reactor to Israel in 1959 and 1960 in a secret deal. There were no "safeguards" required in this deal to prevent the use of heavy water for non-peaceful purposes. The British newspaper Daily Express accused Israel of working on a bomb in 1960. When the United States intelligence community discovered the purpose of the Dimona plant in the early 1960s, it demanded that Israel agree to international inspections. Israel agreed, but on a condition that the U.S., rather than IAEA, inspectors were used, and that Israel would receive advanced notice of all inspections. 
Some claim that because Israel knew the schedule of the inspectors' visits, it was able to hide the alleged purpose of the site from the inspectors by installing temporary false walls and other devices before each inspection. The inspectors eventually informed the U.S. government that their inspections were useless due to Israeli restrictions on what areas of the facility they could inspect. In 1969, the United States terminated the inspections. In 1986, Mordechai Vanunu, a former technician at the Dimona plant, revealed to the media some evidence of Israel's nuclear program. Israeli Mossad agents arrested him in Italy, drugged him and transported him to Israel. An Israeli court then tried him in secret on charges of treason and espionage, and sentenced him to eighteen years imprisonment. He was freed on 21 April 2004, but was severely limited by the Israeli government. He was arrested again on 11 November 2004, though formal charges were not immediately filed. Comments on photographs taken by Vanunu inside the Negev Nuclear Research Center have been made by prominent scientists. British nuclear weapons scientist Frank Barnaby, who questioned Vanunu over several days, estimated Israel had enough plutonium for about 150 weapons. According to Lieutenant Colonel Warner D. Farr in a report to the USAF Counterproliferation Center, while France was previously a leader in nuclear research "Israel and France were at a similar level of expertise after WWII, and Israeli scientists could make significant contributions to the French effort." In 1986 Francis Perrin, French high-commissioner for atomic energy from 1951 to 1970 stated that in 1949 Israeli scientists were invited to the Saclay nuclear research facility, this cooperation leading to a joint effort including sharing of knowledge between French and Israeli scientists especially those with knowledge from the Manhattan Project. Nuclear arms control in South Asia The public stance of India and Pakistan on non-proliferation differs markedly. Pakistan has initiated a series of regional security proposals. It has repeatedly proposed a nuclear-free zone in South Asia, and has proclaimed its willingness to engage in nuclear disarmament and to sign the Non-Proliferation Treaty if India would do so. It has endorsed a United States proposal for a regional five power conference to consider non-proliferation in South Asia. India has taken the view that solutions to regional security issues should be found at the international rather than the regional level, since its chief concern is with China. It therefore rejects Pakistan's proposals. Instead, the 'Gandhi Plan', put forward in 1988, proposed the revision of the Non-Proliferation Treaty, which it regards as inherently discriminatory in favor of the nuclear-weapon States, and a timetable for complete nuclear weapons disarmament. It endorsed early proposals for a Comprehensive Test Ban Treaty and for an international convention to ban the production of highly enriched uranium and plutonium for weapons purposes, known as the 'cut-off' convention. The United States for some years, especially under the Clinton administration, pursued a variety of initiatives to persuade India and Pakistan to abandon their nuclear weapons programs and to accept comprehensive international safeguards on all their nuclear activities. To this end, the Clinton administration proposed a conference of the five nuclear-weapon states, Japan, Germany, India and Pakistan. 
India refused this and similar previous proposals, and countered with demands that other potential weapons states, such as Iran and North Korea, should be invited, and that regional limitations would only be acceptable if they were accepted equally by China. The United States would not accept the participation of Iran and North Korea and these initiatives have lapsed. Another, more recent approach, centers on 'capping' the production of fissile material for weapons purposes, which would hopefully be followed by 'roll back'. To this end, India and the United States jointly sponsored a UN General Assembly resolution in 1993 calling for negotiations for a 'cut-off' convention. Should India and Pakistan join such a convention, they would have to agree to halt the production of fissile materials for weapons and to accept international verification on their relevant nuclear facilities (enrichment and reprocessing plants). It appears that India is now prepared to join negotiations regarding such a Cut-off Treaty, under the UN Conference on Disarmament. Bilateral confidence-building measures between India and Pakistan to reduce the prospects of confrontation have been limited. In 1990 each side ratified a treaty not to attack the other's nuclear installations, and at the end of 1991 they provided one another with a list showing the location of all their nuclear plants, even though the respective lists were regarded as not being wholly accurate. Early in 1994 India proposed a bilateral agreement for a 'no first use' of nuclear weapons and an extension of the 'no attack' treaty to cover civilian and industrial targets as well as nuclear installations. Having promoted the Comprehensive Test Ban Treaty since 1954, India dropped its support in 1995 and in 1996 attempted to block the Treaty. Following the 1998 tests the question has been reopened and both Pakistan and India have indicated their intention to sign the CTBT. Indian ratification may be conditional upon the five weapons states agreeing to specific reductions in nuclear arsenals. The UN Conference on Disarmament has also called upon both countries "to accede without delay to the Non-Proliferation Treaty", presumably as non-weapons states. NPT signatories Egypt In 2004 and 2005, Egypt disclosed past undeclared nuclear activities and material to the IAEA. In 2007 and 2008, high-enriched and low-enriched uranium particles were found in environmental samples taken in Egypt. In 2008, the IAEA states Egypt's statements were consistent with its own findings. In May 2009, Reuters reported that the IAEA was conducting further investigation in Egypt. Iran In 2003, the IAEA reported that Iran had been in breach of its obligations to comply with provisions of its safeguard agreement. In 2005, the IAEA Board of Governors voted in a rare non-consensus decision to find Iran in non-compliance with its NPT Safeguards Agreement and to report that non-compliance to the UN Security Council. In response, the UN Security Council passed a series of resolutions citing concerns about the program. Iran's representative to the UN argues sanctions compel Iran to abandon its rights under the Nuclear Nonproliferation Treaty to peaceful nuclear technology. Iran says its uranium enrichment program is exclusively for peaceful purposes and has enriched uranium to "less than 5 percent," consistent with fuel for a nuclear power plant and significantly below the purity of WEU (around 90%) typically used in a weapons program. 
The director general of the International Atomic Energy Agency, Yukiya Amano, said in 2009 he had not seen any evidence in IAEA official documents that Iran was developing nuclear weapons. Iraq Up to the late 1980s it was generally assumed that any undeclared nuclear activities would have to be based on the diversion of nuclear material from safeguards. States acknowledged the possibility of nuclear activities entirely separate from those covered by safeguards, but it was assumed they would be detected by national intelligence activities. There was no particular effort by the IAEA to attempt to detect them. Iraq had been making efforts to secure a nuclear potential since the 1960s. In the late 1970s a specialised plant, Osiraq, was constructed near Baghdad. The plant was attacked during the Iran–Iraq War and was destroyed by Israeli bombers in June 1981. Not until the 1990 NPT Review Conference did some states raise the possibility of making more use of (for example) provisions for "special inspections" in existing NPT Safeguards Agreements. Special inspections can be undertaken at locations other than those where safeguards routinely apply, if there is reason to believe there may be undeclared material or activities. After inspections in Iraq following the UN Gulf War cease-fire resolution showed the extent of Iraq's clandestine nuclear weapons program, it became clear that the IAEA would have to broaden the scope of its activities. Iraq was an NPT Party, and had thus agreed to place all its nuclear material under IAEA safeguards. But the inspections revealed that it had been pursuing an extensive clandestine uranium enrichment programme, as well as a nuclear weapons design programme. The main thrust of Iraq's uranium enrichment program was the development of technology for electromagnetic isotope separation (EMIS) of indigenous uranium. This uses the same principles as a mass spectrometer (albeit on a much larger scale). Ions of uranium-238 and uranium-235 are separated because they describe arcs of different radii when they move through a magnetic field (see the worked example at the end of this passage). This process was used in the Manhattan Project to make the highly enriched uranium used in the Hiroshima bomb, but was abandoned soon afterwards. The Iraqis did the basic research work at their nuclear research establishment at Tuwaitha, near Baghdad, and were building two full-scale facilities at Tarmiya and Ash Sharqat, north of Baghdad. However, when the war broke out, only a few separators had been installed at Tarmiya, and none at Ash Sharqat. The Iraqis were also very interested in centrifuge enrichment, and had been able to acquire some components, including some carbon-fibre rotors, which they were at an early stage of testing. In May 1998, Newsweek reported that Abdul Qadeer Khan had sent Iraq centrifuge designs, which were apparently confiscated by UNMOVIC officials. Iraqi officials said the documents were authentic but that they had not agreed to work with A. Q. Khan, fearing an ISI sting operation due to strained relations between the two countries. The Government of Pakistan and A. Q. Khan strongly denied this allegation, while the government declared the evidence to be "fraudulent". Iraq was clearly in violation of its NPT and safeguards obligations, and the IAEA Board of Governors ruled to that effect. The UN Security Council then ordered the IAEA to remove, destroy or render harmless Iraq's nuclear weapons capability. 
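For the EMIS arcs mentioned above, the size of the isotope separation can be made concrete with the standard textbook relations for a charged particle in a magnetic field; this worked example is only illustrative and is not drawn from the inspection reports themselves. A singly charged ion accelerated through a potential V and bent by a field B follows a circle of radius

\[ qV = \tfrac{1}{2}mv^{2} \;\Rightarrow\; v = \sqrt{\frac{2qV}{m}}, \qquad r = \frac{mv}{qB} = \frac{1}{B}\sqrt{\frac{2mV}{q}} \;\propto\; \sqrt{m}, \]

so the radii of the two uranium ion beams differ only by

\[ \frac{r_{238}}{r_{235}} = \sqrt{\frac{238}{235}} \approx 1.006, \]

about 0.6%, which is why EMIS separators must be physically large and precisely engineered to resolve the two beams, consistent with the full-scale facilities described above.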
This removal and destruction work was done by mid-1998, but Iraq then ceased all cooperation with the UN, so the IAEA withdrew from this work. The revelations from Iraq provided the impetus for a very far-reaching reconsideration of what safeguards are intended to achieve. Libya Libya possesses ballistic missiles and previously pursued nuclear weapons under the leadership of Muammar Gaddafi. On 19 December 2003, Gaddafi announced that Libya would voluntarily eliminate all materials, equipment and programs that could lead to internationally proscribed weapons, including weapons of mass destruction and long-range ballistic missiles. Libya signed the Nuclear Non-Proliferation Treaty (NPT) in 1968 and ratified it in 1975, and concluded a safeguards agreement with the International Atomic Energy Agency (IAEA) in 1980. In March 2004, the IAEA Board of Governors welcomed Libya's decision to eliminate its formerly undeclared nuclear program, which it found had violated Libya's safeguards agreement, and approved Libya's Additional Protocol. The United States and the United Kingdom assisted Libya in removing equipment and material from its nuclear weapons program, with independent verification by the IAEA. Myanmar A report in the Sydney Morning Herald and Searchina, a Japanese newspaper, cited two Myanmar defectors as saying that the State Peace and Development Council junta was secretly building a nuclear reactor and plutonium extraction facility with North Korea's help, with the aim of acquiring its first nuclear bomb within five years. According to the report, "The secret complex, much of it in caves tunnelled into a mountain at Naung Laing in northern Burma, runs parallel to a civilian reactor being built at another site by Russia that both the Russians and Burmese say will be put under international safeguards." In 2002, Myanmar had notified the IAEA of its intention to pursue a civilian nuclear programme. Later, Russia announced that it would build a nuclear reactor in Myanmar. There have also been reports that two Pakistani scientists from the A.Q. Khan network had been dispatched to Myanmar, where they had settled down to help Myanmar's project. More recently, the David Albright-led Institute for Science and International Security (ISIS) raised the alarm about Myanmar attempting a nuclear project with North Korean help. If true, officials familiar with developments said, the full weight of international pressure would be brought against Myanmar. Equally, however, the information provided by the defectors is "preliminary" and could be used by the West to put pressure on Myanmar over democracy and human rights issues in the run-up to the country's elections in 2010. During an ASEAN meeting in Thailand in July 2009, US Secretary of State Hillary Clinton highlighted concerns about the North Korean link. "We know there are also growing concerns about military cooperation between North Korea and Burma which we take very seriously," Clinton said. However, in 2012, after contact with the American president, Barack Obama, the Burmese leader, Thein Sein, renounced military ties with the DPRK (North Korea). North Korea The Democratic People's Republic of Korea (DPRK) acceded to the NPT in 1985 as a condition for the supply of a nuclear power station by the USSR. However, it delayed concluding its NPT Safeguards Agreement with the IAEA, a process which should take only 18 months, until April 1992. 
During that period, it brought into operation a small gas-cooled, graphite-moderated, natural-uranium (metal) fuelled "Experimental Power Reactor" of about 25 MWt (5 MWe), based on the UK Magnox design. While this was a well-suited design to start a wholly indigenous nuclear reactor development, it also exhibited all the features of a small plutonium production reactor for weapons purposes. North Korea also made substantial progress in the construction of two larger reactors designed on the same principles, a prototype of about 200 MWt (50 MWe), and a full-scale version of about 800 MWt (200 MWe). They made only slow progress; construction halted on both in 1994 and has not resumed. Both reactors have degraded considerably since that time and would take significant efforts to refurbish. In addition, it completed and commissioned a reprocessing plant that makes the Magnox spent nuclear fuel safe, recovering uranium and plutonium. That plutonium, if the fuel was only irradiated to a very low burn-up, would have been in a form very suitable for weapons. Although all these facilities at the Yongbyon Nuclear Scientific Research Center were to be under safeguards, there was always the risk that at some stage, the DPRK would withdraw from the NPT and use the plutonium for weapons. One of the first steps in applying NPT safeguards is for the IAEA to verify the initial stocks of uranium and plutonium to ensure that all the nuclear materials in the country have been declared for safeguards purposes. While undertaking this work in 1992, IAEA inspectors found discrepancies that indicated that the reprocessing plant had been used more often than the DPRK had declared, which suggested that the DPRK could have weapons-grade plutonium which it had not declared to the IAEA. Information passed to the IAEA by a Member State (as required by the IAEA) supported that suggestion by indicating that the DPRK had two undeclared waste or other storage sites. In February 1993 the IAEA called on the DPRK to allow special inspections of the two sites so that the initial stocks of nuclear material could be verified. The DPRK refused, and on 12 March announced its intention to withdraw from the NPT (three months' notice is required). In April 1993 the IAEA Board concluded that the DPRK was in non-compliance with its safeguards obligations and reported the matter to the UN Security Council. In June 1993 the DPRK announced that it had "suspended" its withdrawal from the NPT, but subsequently claimed a "special status" with respect to its safeguards obligations. This was rejected by IAEA. Once the DPRK's non-compliance had been reported to the UN Security Council, the essential part of the IAEA's mission had been completed. Inspections in the DPRK continued, although inspectors were increasingly hampered in what they were permitted to do by the DPRK's claim of a "special status". However, some 8,000 corroding fuel rods associated with the experimental reactor have remained under close surveillance. Following bilateral negotiations between the United States and the DPRK, and the conclusion of the Agreed Framework in October 1994, the IAEA has been given additional responsibilities. The agreement requires a freeze on the operation and construction of the DPRK's plutonium production reactors and their related facilities, and the IAEA is responsible for monitoring the freeze until the facilities are eventually dismantled. 
The DPRK remains uncooperative with the IAEA verification work and has yet to comply with its safeguards agreement. While Iraq was defeated in a war, allowing the UN the opportunity to seek out and destroy its nuclear weapons programme as part of the cease-fire conditions, the DPRK was not defeated, nor was it vulnerable to other measures, such as trade sanctions. It can scarcely afford to import anything, and sanctions on vital commodities, such as oil, would either be ineffective or risk provoking war. Ultimately, the DPRK was persuaded to stop what appeared to be its nuclear weapons programme in exchange, under the Agreed Framework, for about US$5 billion in energy-related assistance. This included two 1000 MWe light-water nuclear power reactors based on an advanced U.S. System-80 design. In January 2003 the DPRK withdrew from the NPT. In response, a series of discussions among the DPRK, the United States, and China led to the six-party talks (the parties being the DPRK, the ROK, China, Japan, the United States and Russia) held in Beijing, with the first beginning in April 2004, concerning North Korea's weapons program. On 10 January 2005, North Korea declared that it was in possession of nuclear weapons. On 19 September 2005, the fourth round of the Six-Party Talks ended with a joint statement in which North Korea agreed to end its nuclear programs and return to the NPT in exchange for diplomatic, energy and economic assistance. However, by the end of 2005 the DPRK had halted all six-party talks because the United States froze certain DPRK international financial assets, such as those in a bank in Macau. On 9 October 2006, North Korea announced that it had performed its first-ever nuclear weapon test. On 18 December 2006, the six-party talks finally resumed. On 13 February 2007, the parties announced "Initial Actions" to implement the 2005 joint statement, including shutdown and disablement of North Korean nuclear facilities in exchange for energy assistance. Reacting to UN sanctions imposed after missile tests in April 2009, North Korea withdrew from the six-party talks, restarted its nuclear facilities and conducted a second nuclear test on 25 May 2009. On 12 February 2013, North Korea conducted an underground nuclear explosion with an estimated yield of 6 to 7 kilotonnes. The detonation registered a magnitude 4.9 disturbance in the area around the epicenter. Russia Security of nuclear weapons in Russia remains a matter of concern. According to high-ranking Russian SVR defector Tretyakov, he met in 1991 with two Russian businessmen representing a state-created C-W corporation. They proposed a project to destroy large quantities of chemical wastes collected from Western countries at the island of Novaya Zemlya (a test site for Soviet nuclear weapons) using an underground nuclear blast. The project was rejected by Canadian representatives, but one of the businessmen told Tretyakov that he kept his own nuclear bomb at his dacha outside Moscow. Tretyakov thought the man was insane, but the "businessman" (Vladimir K. Dmitriev) replied: "Do not be so naive. With economic conditions the way they are in Russia today, anyone with enough money can buy a nuclear bomb. It's no big deal really". South Africa In 1991, South Africa acceded to the NPT, concluded a comprehensive safeguards agreement with the IAEA, and submitted a report on its nuclear material subject to safeguards. 
At the time, the state had a nuclear power programme producing nearly 10% of the country's electricity, whereas Iraq and North Korea only had research reactors. The IAEA's initial verification task was complicated by South Africa's announcement that between 1979 and 1989 it had built and then dismantled a number of nuclear weapons. South Africa asked the IAEA to verify the conclusion of its weapons programme. In 1995 the IAEA declared that it was satisfied all materials were accounted for and the weapons programme had been terminated and dismantled. South Africa has signed the NPT and now holds the distinction of being the only known state to have indigenously produced nuclear weapons and then verifiably dismantled them. Sweden After World War II, Sweden considered building nuclear weapons to deter a Soviet invasion. From 1945 to 1972 the Swedish government ran a clandestine nuclear weapons program under the guise of civilian defense research at the Swedish National Defence Research Institute. By the late 1950s, the work had reached the point where underground testing was feasible. However, at that time the Riksdag prohibited research and development of nuclear weapons, pledging that research should be done only for the purpose of defense against nuclear attack. The option to continue development was abandoned in 1966, and Sweden subsequently signed the Non-Proliferation Treaty in 1968. The program was finally concluded in 1972. Syria On 6 September 2007, Israel bombed an officially unidentified site in Syria which it later asserted was a nuclear reactor under construction (see Operation Outside the Box). The alleged reactor was not asserted to be operational, and it was not asserted that nuclear material had been introduced into it. Syria said the site was a military site and was not involved in any nuclear activities. The IAEA requested Syria to provide further access to the site and any other locations where the debris and equipment from the building had been stored. Syria denounced what it called the Western "fabrication and forging of facts" in regard to the incident. IAEA Director General Mohamed ElBaradei criticized the strikes and deplored that information regarding the matter had not been shared with his agency earlier. Taiwan During the Cold War, the United States deployed nuclear weapons at Tainan Air Force Base in Taiwan as part of the United States Taiwan Defense Command. Nonetheless, Taiwan began its own nuclear weapon program under the auspices of the Institute of Nuclear Energy Research (INER) at the Chungshan Institute of Science and Technology in 1967. Taiwan was able to acquire nuclear technology from abroad (including a research reactor from Canada and low-grade plutonium from the United States); these were subject to International Atomic Energy Agency (IAEA) safeguards, but Taiwan nevertheless used them for its nuclear weapon program. In 1972, the US president ordered the removal of the nuclear weapons from Taiwan by 1974. Then recognized as the Republic of China, Taiwan ratified the NPT in 1970. After the IAEA found evidence of Taiwan's efforts to produce weapons-grade plutonium, Taiwan agreed to dismantle its nuclear weapon program under U.S. pressure in September 1976. The nuclear reactor was shut down and the plutonium mostly returned to the U.S. However, secret nuclear activities were exposed after the Lieyu massacre, when Colonel Chang Hsien-yi, deputy director of INER, defected to the U.S. in December 1987 and produced a cache of incriminating documents. 
This program was also halted under U.S. pressure. Breakout capability For a state that does not possess nuclear weapons, the capability to produce one or more weapons quickly and with little warning is called a breakout capability. Japan, with its civil nuclear infrastructure and experience, has a stockpile of separated plutonium that could be fabricated into weapons relatively quickly. Iran, according to some observers, may be seeking (or may have already achieved) a breakout capability, with its stockpile of low-enriched uranium and its capability to enrich further to weapons grade. Arguments for and against proliferation There has been much debate in the academic study of international security as to the advisability of proliferation. In the late 1950s and early 1960s, Gen. Pierre Marie Gallois of France, an adviser to Charles de Gaulle, argued in books like The Balance of Terror: Strategy for the Nuclear Age (1961) that mere possession of a nuclear arsenal, what the French called the Force de frappe, was enough to ensure deterrence, and thus concluded that the spread of nuclear weapons could increase international stability. Some very prominent neo-realist scholars, such as Kenneth Waltz, Emeritus Professor of Political Science at the University of California, Berkeley and Adjunct Senior Research Scholar at Columbia University, and John Mearsheimer, R. Wendell Harrison Distinguished Service Professor of Political Science at the University of Chicago, have continued to argue along the lines of Gallois in their own work. Specifically, these scholars advocate some forms of nuclear proliferation, arguing that it will decrease the likelihood of war, especially in troubled regions of the world. Aside from the majority opinion, which opposes proliferation in any form, there are two schools of thought on the matter: those, like Mearsheimer, who favor selective proliferation, and those, such as Waltz, who advocate a laissez-faire attitude to programs like North Korea's. Total proliferation In essence, Waltz argues that the logic of mutually assured destruction (MAD) should work in all security environments, regardless of historical tensions or recent hostility. He sees the Cold War as the ultimate proof of MAD logic: the only occasion when enmity between two Great Powers did not result in military conflict. This was, he argues, because nuclear weapons promote caution in decision-makers. Neither Washington nor Moscow would risk a nuclear apocalypse to advance territorial or power goals, hence a peaceful stalemate ensued (Waltz and Sagan (2003), p. 24). Waltz believes there to be no reason why this effect would not occur in all circumstances. Todd Sechser and Matthew Fuhrmann find that nuclear weapons do not necessarily lead states to be more successful in coercive diplomacy. They argue that nuclear weapons are useful for defense, but are not effective offensive tools. As a consequence, they write that nuclear proliferation may "be less harmful for international security than many believe" while cautioning that nuclear proliferation may still be harmful due to miscalculation, terrorism and sabotage. Selective proliferation John Mearsheimer would not support Waltz's optimism in the majority of potential instances; however, he has argued for nuclear proliferation as policy in certain places, such as post–Cold War Europe. In two famous articles, Mearsheimer opined that Europe was bound to return to its pre–Cold War environment of regular conflagration and suspicion at some point in the future. 
He advocated arming both Germany and Ukraine with nuclear weaponry in order to achieve a balance of power between these states in the east and France/UK in the west, and predicted that otherwise war would eventually break out on the European continent. Russia did invade Ukraine in 2022. A separate argument against Waltz's open proliferation and in favor of Mearsheimer's selective distribution is the possibility of nuclear terrorism. Some countries included in the aforementioned laissez-faire distribution could allow the transfer of nuclear materials, or of a bomb itself, into the hands of groups not affiliated with any government. Such countries might lack the political will or the ability to prevent devices from being transferred to a third party. Not being deterred by the prospect of self-annihilation, terrorist groups could push their own nuclear agendas or serve as shadow fronts to carry out the attack plans of such unstable governments. Arguments against both positions There are numerous arguments presented against both selective and total proliferation, generally targeting the very neorealist assumptions (such as the primacy of military security in state agendas, the weakness of international institutions, and the long-run unimportance of economic integration and globalization to state strategy) its proponents tend to make. With respect to Mearsheimer's specific example of Europe, many economists and neoliberals argue that the economic integration of Europe through the development of the European Union has made war in most of the European continent so disastrous economically as to serve as an effective deterrent. Constructivists take this one step further, frequently arguing that the development of EU political institutions has led or will lead to the development of a nascent European identity, which most states on the European continent wish to partake in to some degree or another, and which makes all states within or aspiring to be within the EU regard war between them as unthinkable. As for Waltz, the general opinion is that most states are not in a position to safely guard against nuclear use, that he underestimates the long-standing antipathy in many regions, and that weak states will be unable to prevent (or will actively provide for) the disastrous possibility of nuclear terrorism. Waltz has dealt with all of these objections at some point in his work, though some scholars feel he has not adequately responded (e.g. Betts, 2000). The Learning Channel documentary Doomsday: "On The Brink" illustrated 40 years of U.S. and Soviet nuclear weapons accidents. Even the 1995 Norwegian rocket incident demonstrated a potential scenario in which Russian democratization and military downsizing at the end of the Cold War did not eliminate the danger of accidental nuclear war through command and control errors. After asking whether a future Russian ruler or renegade Russian general might be tempted to use nuclear weapons to make foreign policy, the documentary writers pointed to the weakness of Russian security over its nuclear stocks, and especially to the danger that human nature will covet the ultimate weapon of mass destruction as a means of exercising political and military power. According to the documentary, the Soviets, Russians, and Americans came very close to global catastrophe. Historians and military experts agree that proliferation can be slowed, but never stopped (technology cannot be uninvented). 
Proliferation begets proliferation 'Proliferation begets proliferation' is a concept described by professor of political science Scott Sagan in his article, "Why Do States Build Nuclear Weapons?". This concept can be described as a strategic chain reaction. If one state produces a nuclear weapon, it creates almost a domino effect within the region. States in the region will seek to acquire nuclear weapons to balance or eliminate the security threat. Sagan describes this reaction in his article, where he states, "Every time one state develops nuclear weapons to balance against its main rival, it also creates a nuclear threat to another region, which then has to initiate its own nuclear weapons program to maintain its national security". Looking back through history, we can see how this has taken place. When the United States demonstrated its nuclear capability with the bombings of Hiroshima and Nagasaki, the Soviet Union started to develop its own program in preparation for the Cold War. With the Russian military buildup, France and the United Kingdom perceived a security threat and therefore pursued nuclear weapons (Sagan, p. 71). Even though proliferation begets proliferation, this does not guarantee that other states will successfully develop nuclear weapons, because a state's economic stability plays an important role in whether it will be able to acquire them. The article written by Dong-Jong Joo and Erik Gartzke discusses how the economy of a country determines whether it will successfully acquire nuclear weapons. Iran Former Iranian President Mahmoud Ahmadinejad has been a frequent critic of the concept of "nuclear apartheid" as it has been put into practice by several countries, particularly the United States. In an interview with CNN's Christiane Amanpour, Ahmadinejad said that Iran was "against 'nuclear apartheid,' which means some have the right to possess it, use the fuel, and then sell it to another country for 10 times its value. We're against that. We say clean energy is the right of all countries. But also it is the duty and the responsibility of all countries, including ours, to set up frameworks to stop the proliferation of it." Hours after that interview, he spoke passionately in favor of Iran's right to develop nuclear technology, claiming the nation should have the same liberties. Iran is a signatory of the Nuclear Non-Proliferation Treaty and claims that any work done in regard to nuclear technology is related only to civilian uses, which is acceptable under the treaty. In 2005, the International Atomic Energy Agency found that Iran had violated its safeguards obligations under the treaty by performing uranium enrichment in secret, after which the United Nations Security Council ordered Iran to suspend all uranium enrichment until July 2015. India India has also been discussed in the context of "nuclear apartheid". India has consistently attempted to pass measures calling for full international disarmament; however, these have not succeeded, due to opposition from states that already have nuclear weapons. In light of this, India viewed nuclear weapons as a necessary right for all nations as long as certain states were still in possession of nuclear weapons. India stated that nuclear issues were directly related to national security. Years before India's first underground nuclear test in 1998, the Comprehensive Nuclear-Test-Ban Treaty was passed. 
Some have argued that coercive language was used in an attempt to persuade India to sign the treaty, which was pushed for heavily by neighboring China. India viewed the treaty as a means for countries that already had nuclear weapons, primarily the five nations of the United Nations Security Council, to keep their weapons while ensuring that no other nations could develop them. Security guarantees In their article, "The Correlates of Nuclear Proliferation," Sonali Singh and Christopher R. Way argue that states protected by a security guarantee from a great power, particularly if backed by the "nuclear umbrella" of extended deterrence, have less of an incentive to acquire their own nuclear weapons. States that lack such guarantees are more likely to feel their security threatened and so have greater incentives to bolster or assemble nuclear arsenals. As a result, it is then argued that bipolarity may prevent proliferation whereas multipolarity may actually influence proliferation.
Technology
Weapon of mass destruction
null
22194
https://en.wikipedia.org/wiki/Operating%20system
Operating system
An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, peripherals, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. Android is the most popular operating system, with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems (special-purpose operating systems), such as embedded and real-time systems, exist for many applications. Security-focused operating systems also exist. Some operating systems have low system requirements (e.g. a lightweight Linux distribution), while others may have higher system requirements. Some operating systems require installation or may come pre-installed with purchased computers (OEM installation), whereas others may run directly from media (e.g. a live CD) or flash memory (e.g. a USB stick). Definition and purpose An operating system is difficult to define, but has been called "the layer of software that manages a computer's resources for its users and their applications". Operating systems include the software that is always running, called a kernel, but can include other software as well. The two other types of programs that can run on a computer are system programs, which are associated with the operating system but may not be part of the kernel, and applications, meaning all other software. There are three main purposes that an operating system fulfills. First, operating systems allocate resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory. On modern personal computers, users often want to run several applications at once. In order to ensure that one program cannot monopolize the computer's limited hardware resources, the operating system gives each application a share of the resource, either in time (CPU) or space (memory). The operating system also must isolate applications from each other to protect them from errors and security vulnerabilities in another application's code, while still enabling communications between different applications. Second, operating systems provide an interface that abstracts the details of accessing hardware (such as physical memory) to make things easier for programmers. Virtualization also enables the operating system to mask limited hardware resources; for example, virtual memory can provide a program with the illusion of nearly unlimited memory that exceeds the computer's actual memory. Third, operating systems provide common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten (a minimal illustration follows below). 
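As a sketch of that "common services" idea, the following program uses only the portable POSIX file interface, so the same source compiles and runs unchanged on any operating system that implements those calls; the file name example.txt is purely illustrative.

#include <fcntl.h>     /* open */
#include <stdio.h>     /* perror */
#include <unistd.h>    /* read, write, close */

int main(void) {
    /* Ask the operating system to open a file; the OS hides whether the
       bytes live on a local disk, an SSD, or a network share. */
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[4096];
    ssize_t n;
    /* read() and write() are system calls: control transfers into the
       kernel, which drives the actual device on the program's behalf. */
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        write(STDOUT_FILENO, buf, (size_t)n);
    }

    close(fd);
    return 0;
}

The program never touches a device register or a disk block directly; everything below the open/read/write boundary is the operating system's responsibility, which is precisely what allows the code to move between machines without being rewritten.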
Which services to include in an operating system varies greatly, and this functionality makes up the great majority of code for most operating systems. Types of operating systems Multicomputer operating systems With multiprocessors multiple CPUs share memory. A multicomputer or cluster computer has multiple CPUs, each of which has its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive; they are universal in cloud computing because of the size of the machine needed. The different CPUs often need to send and receive messages to each other; to ensure good performance, the operating systems for these machines need to minimize this copying of packets. Newer systems are often multiqueue—separating groups of users into separate queues—to reduce the need for packet copying and support more concurrent users. Another technique is remote direct memory access, which enables each CPU to access memory belonging to other CPUs. Multicomputer operating systems often support remote procedure calls where a CPU can call a procedure on another CPU, or distributed shared memory, in which the operating system uses virtualization to generate shared memory that does not physically exist. Distributed systems A distributed system is a group of distinct, networked computers—each of which might have their own operating system and file system. Unlike multicomputers, they may be dispersed anywhere in the world. Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system. Embedded Embedded operating systems are designed to be used in embedded computer systems, whether they are internet of things objects or not connected to a network. Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10 kilobytes, and the smallest are for smart cards. Examples include Embedded Linux, QNX, VxWorks, and the extra-small systems RIOT and TinyOS. Real-time A real-time operating system is an operating system that guarantees to process events or data by or at a specific moment in time. Hard real-time systems require exact timing and are common in manufacturing, avionics, military, and other similar uses. With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones. In order for hard real-time systems be sufficiently exact in their timing, often they are just a library with no protection between applications, such as eCos. Hypervisor A hypervisor is an operating system that runs a virtual machine. The virtual machine is unaware that it is an application and operates as if it had its own hardware. Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development, and debugging. They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system. 
Library A library operating system (libOS) is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with a single application and configuration code to construct a unikernel: a specialized (only the absolute necessary pieces of code are extracted from libraries and bound together ), single address space, machine image that can be deployed to cloud or embedded environments. The operating system code and application code are not executed in separated protection domains (there is only a single application running, at least conceptually, so there is no need to prevent interference between applications) and OS services are accessed via simple library calls (potentially inlining them based on compiler thresholds), without the usual overhead of context switches, in a way similarly to embedded and real-time OSes. Note that this overhead is not negligible: to the direct cost of mode switching it's necessary to add the indirect pollution of important processor structures (like CPU caches, the instruction pipeline, and so on) which affects both user-mode and kernel-mode performance. History The first computers in the late 1940s and 1950s were directly programmed either with plugboards or with machine code inputted on media such as punch cards, without programming languages or operating systems. After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators who manually do what a modern operating system would do, such as scheduling programs to run, but mainframes still had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS. In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. The OS/360 also was the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one. Around the same time, teleprinters began to be used as terminals so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing. The UNIX operating system originated as a development of MULTICS for a single user. Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD). To increase compatibility, the IEEE released the POSIX standard for operating system application programming interfaces (APIs), which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX is used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones. Microcomputers The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980. 
For around five years, the CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers. Later, IBM bought the DOS (Disk Operating System) from Microsoft. After modifications requested by IBM, the resulting system was called MS-DOS ( Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX. Apple's Macintosh was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows later was rewritten as a stand-alone operating system, borrowing so many features from another (VAX VMS) that a large legal settlement was paid. In the twenty-first century, Windows continues to be popular on personal computers but has less market share of servers. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers but are also used on mobile devices and many other computer systems. On mobile devices, Symbian OS was dominant at first, being usurped by BlackBerry OS (introduced 2002) and iOS for iPhones (from 2007). Later on, the open-source Android operating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular. Components The components of an operating system are designed to ensure that various parts of a computer function cohesively. With the de facto obsoletion of DOS, all user software must interact with the operating system to access hardware. Kernel The kernel is the part of the operating system that provides protection between different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power of malicious software and protecting private data, and ensuring that one program cannot monopolize the computer's resources. Most operating systems have two modes of operation: in user mode, the hardware checks that the software is only executing legal instructions, whereas the kernel has unrestricted powers and is not subject to these checks. The kernel also manages memory for other processes and controls access to input/output devices. Program execution The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program typically involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices. However, in some systems an application can request that the operating system execute another application within the same process, either as a subroutine or in a separate thread, e.g., the LINK and ATTACH facilities of OS/360 and successors. Interrupts An interrupt (also known as an abort, exception, fault, signal, or trap) provides an efficient way for most operating systems to react to the environment. 
Interrupts cause the central processing unit (CPU) to switch control flow away from the currently running program to an interrupt handler, also known as an interrupt service routine (ISR). An interrupt service routine may cause the CPU to perform a context switch. The details of how a computer processes an interrupt vary from architecture to architecture, and the details of how interrupt service routines behave vary from operating system to operating system. However, several interrupt functions are common: the architecture and operating system must transfer control to an interrupt service routine, save the state of the currently running process, and restore that state after the interrupt is serviced. Software interrupt A software interrupt is a message to a process that an event has occurred. This contrasts with a hardware interrupt, which is a message to the central processing unit (CPU) that an event has occurred. Software interrupts are similar to hardware interrupts in that there is a change away from the currently running process, and both kinds of interrupt execute an interrupt service routine. Software interrupts may be normally occurring events. It is expected that a time slice will expire, so the kernel will have to perform a context switch; similarly, a computer program may set a timer to go off after a few seconds in case too much data causes an algorithm to take too long. Software interrupts may also be error conditions, such as a malformed machine instruction; the most common error conditions are division by zero and accessing an invalid memory address. Users can send messages to the kernel to modify the behavior of a currently running process. For example, in the command-line environment, pressing the interrupt character (usually Control-C) might terminate the currently running process. To generate software interrupts for x86 CPUs, the INT assembly language instruction is available. The syntax is INT X, where X is the offset number (in hexadecimal format) into the interrupt vector table. Signal To generate software interrupts in Unix-like operating systems, the kill(pid,signum) system call sends a signal to another process: pid is the process identifier of the receiving process, and signum is the signal number (in mnemonic format) to be sent. (The abrasive name kill was chosen because early implementations only terminated the process.) In Unix-like operating systems, signals inform processes of the occurrence of asynchronous events. To communicate asynchronously, interrupts are required. One situation in which a process needs to communicate asynchronously with another process is a variation of the classic reader/writer problem. The writer receives a pipe from the shell for its output to be sent to the reader's input stream. The command-line syntax is alpha | bravo. alpha will write to the pipe when its computation is ready and then sleep in the wait queue. bravo will then be moved to the ready queue and soon will read from its input stream. The kernel will generate software interrupts to coordinate the piping. Signals may be classified into seven categories, corresponding to the events that trigger them: a process finishing normally; a process encountering an error exception; a process running out of a system resource; a process executing an illegal instruction; a process setting an alarm event; a process being aborted from the keyboard; and a process raising a tracing alert for debugging. Hardware interrupt Input/output (I/O) devices are slower than the CPU. 
Therefore, it would slow down the computer if the CPU had to wait for each I/O to finish. Instead, a computer may implement interrupts for I/O completion, avoiding the need for polling or busy waiting. Some computers require an interrupt for each character or word, costing a significant amount of CPU time. Direct memory access (DMA) is an architecture feature to allow devices to bypass the CPU and access main memory directly. (Separate from the architecture, a device may perform direct memory access to and from main memory either directly or via a bus.) Input/output Interrupt-driven I/O When a computer user types a key on the keyboard, typically the character appears immediately on the screen. Likewise, when a user moves a mouse, the cursor immediately moves across the screen. Each keystroke and mouse movement generates an interrupt called Interrupt-driven I/O. An interrupt-driven I/O occurs when a process causes an interrupt for every character or word transmitted. Direct memory access Devices such as hard disk drives, solid-state drives, and magnetic tape drives can transfer data at a rate high enough that interrupting the CPU for every byte or word transferred, and having the CPU transfer the byte or word between the device and memory, would require too much CPU time. Data is, instead, transferred between the device and memory independently of the CPU by hardware such as a channel or a direct memory access controller; an interrupt is delivered only when all the data is transferred. If a computer program executes a system call to perform a block I/O write operation, then the system call might execute the following instructions: Set the contents of the CPU's registers (including the program counter) into the process control block. Create an entry in the device-status table. The operating system maintains this table to keep track of which processes are waiting for which devices. One field in the table is the memory address of the process control block. Place all the characters to be sent to the device into a memory buffer. Set the memory address of the memory buffer to a predetermined device register. Set the buffer size (an integer) to another predetermined register. Execute the machine instruction to begin the writing. Perform a context switch to the next process in the ready queue. While the writing takes place, the operating system will context switch to other processes as normal. When the device finishes writing, the device will interrupt the currently running process by asserting an interrupt request. The device will also place an integer onto the data bus. Upon accepting the interrupt request, the operating system will: Push the contents of the program counter (a register) followed by the status register onto the call stack. Push the contents of the other registers onto the call stack. (Alternatively, the contents of the registers may be placed in a system table.) Read the integer from the data bus. The integer is an offset to the interrupt vector table. The vector table's instructions will then: Access the device-status table. Extract the process control block. Perform a context switch back to the writing process. When the writing process has its time slice expired, the operating system will: Pop from the call stack the registers other than the status register and program counter. Pop from the call stack the status register. Pop from the call stack the address of the next instruction, and set it back into the program counter. 
With the program counter now reset, the interrupted process will resume its time slice. Memory management Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by the programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory. Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system. Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which does not exist in all computers. In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt, which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short; since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel generally resorts to terminating the offending program and reports the error. Windows versions 3.1 through ME had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway. Virtual memory The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that is not currently accessible, but nonetheless has been allocated to it, the kernel is interrupted. This kind of interrupt is typically a page fault. When the kernel detects a page fault it generally adjusts the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on a disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there. 
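The protection mechanism described above can be observed from user space. The sketch below is a minimal illustration for a POSIX system, not a description of any particular kernel's internals: it maps a page, revokes its access rights with mprotect, and installs a SIGSEGV handler so the resulting segmentation violation is reported instead of silently killing the process.

#define _DEFAULT_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* Only async-signal-safe calls are permitted inside a signal handler. */
    const char msg[] = "segmentation violation caught; terminating\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    long pagesize = sysconf(_SC_PAGESIZE);
    /* Map one page of readable and writable anonymous memory. */
    char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;
    page[0] = 'x';                /* allowed: the page is writable */

    /* Revoke all access; the MMU will now fault on any further use. */
    mprotect(page, pagesize, PROT_NONE);
    page[0] = 'y';                /* the CPU traps to the kernel, which delivers SIGSEGV */

    puts("never reached");
    return 0;
}

The kernel uses the same trap to implement the page-fault handling described above: depending on the cause, it either adjusts the mapping and resumes the program or, as here, treats the access as an error.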
Concurrency Concurrency refers to the operating system's ability to carry out multiple tasks simultaneously. Virtually all modern operating systems support concurrency. Threads enable splitting a process' work into multiple parts that can run simultaneously. The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating system kernel schedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives. During a context switch a running thread is suspended, its state is saved into the thread control block and stack, and the state of the new thread is loaded in. Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now can interrupt a thread (preemptive multitasking). Threads have their own thread ID, program counter (PC), a register set, and a stack, but share code, heap data, and other resources with other threads of the same process. Thus, there is less overhead to create a thread than a new process. On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs. Parallelism with multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently. File system Permanent storage devices used in twenty-first century computers, unlike volatile dynamic random-access memory (DRAM), are still accessible after a crash or power failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write. The two main technologies are a hard drive consisting of magnetic disks, and flash memory (a solid-state drive that stores data in electrical circuits). The latter is more expensive but faster and more durable. File systems are an abstraction used by the operating system to simplify access to permanent storage. They provide human-readable filenames and other metadata, increase performance via amortization of accesses, prevent multiple threads from accessing the same section of memory, and include checksums to identify corruption. File systems are composed of files (named collections of data, of an arbitrary size) and directories (also called folders) that list human-readable filenames and other directories. An absolute file path begins at the root directory and lists subdirectories divided by punctuation, while a relative path defines the location of a file from a directory. System calls (which are sometimes wrapped by libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application. The operating system's efforts to reduce latency include storing recently requested blocks of memory in a cache and prefetching data that the application has not asked for, but might need next. Device drivers are software specific to each input/output (I/O) device that enables the operating system to work without modification over different hardware. Another component of file systems is a dictionary that maps a file's name and metadata to the data block where its contents are stored. Most file systems use directories to convert file names to file numbers. To find the block number, the operating system uses an index (often implemented as a tree). 
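To make the two-step mapping just described concrete (name to file number, then file number to data block), the toy sketch below keeps a directory in memory as a plain array. The structure names and values are invented for illustration; a real file system stores directories on disk and uses a tree or hash index rather than a linear scan.

#include <stdio.h>
#include <string.h>

/* Toy directory entry: a human-readable name mapped to a file number,
   an index into a table of per-file metadata (analogous to an inode). */
struct dir_entry { const char *name; unsigned file_number; };

/* Toy per-file metadata: the file's first data block and its length. */
struct file_meta { unsigned first_block; unsigned size_bytes; };

static const struct dir_entry directory[] = {
    { "notes.txt",  0 },
    { "report.pdf", 1 },
};

static const struct file_meta files[] = {
    { .first_block = 128, .size_bytes = 4096 },
    { .first_block = 512, .size_bytes = 90210 },
};

/* Resolve a name to its starting block; returns -1 if the name is absent. */
static long lookup_first_block(const char *name) {
    for (size_t i = 0; i < sizeof directory / sizeof directory[0]; i++)
        if (strcmp(directory[i].name, name) == 0)
            return files[directory[i].file_number].first_block;
    return -1;
}

int main(void) {
    printf("notes.txt starts at block %ld\n", lookup_first_block("notes.txt"));
    printf("missing.txt -> %ld\n", lookup_first_block("missing.txt"));
    return 0;
}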
Separately, there is a free space map to track free blocks, commonly implemented as a bitmap. Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reduce fragmentation. Maintaining data reliability in the face of a computer crash or hardware failure is another concern. File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing. Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks) and checksums to detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption. Security Security means protecting users from other users of the same computer, as well as from those seeking remote access to it over a network. Operating system security rests on achieving the CIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of a denial of service attack). As with other computer systems, isolating security domains—in the case of operating systems, the kernel, processes, and virtual machines—is key to achieving security. Other ways to increase security include simplicity to minimize the attack surface, locking access to resources by default, checking all requests for authorization, the principle of least authority (granting the minimum privilege essential for performing a task), privilege separation, and reducing shared data. Some operating system designs are more secure than others. Those with no isolation between the kernel and applications are least secure, while those with a monolithic kernel like most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design features microkernels that separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach. Unikernels are another approach that improves security by minimizing the kernel and separating out other operating system functionality by application. Most operating systems are written in C or C++, which can create potential vulnerabilities for exploitation. Despite attempts to protect against them, many vulnerabilities are caused by buffer overflow attacks, which are enabled by the lack of bounds checking. Hardware vulnerabilities, some of them caused by CPU optimizations, can also be used to compromise the operating system. There are known instances of operating system programmers deliberately implanting vulnerabilities, such as back doors. Operating system security is hampered by increasing complexity and the resulting inevitability of bugs. Because formal verification of operating systems may not be feasible, developers use operating system hardening to reduce vulnerabilities, e.g. address space layout randomization, control-flow integrity, access restrictions, and other techniques. There are no restrictions on who can contribute code to open source operating systems; such operating systems have transparent change histories and distributed governance structures. 
Open source developers strive to work collaboratively to find and eliminate security vulnerabilities, using code review and type checking to expunge malicious code. Andrew S. Tanenbaum advises releasing the source code of all operating systems, arguing that it prevents developers from placing trust in secrecy and thus relying on the unreliable practice of security by obscurity. User interface A user interface (UI) is essential to support human interaction with a computer. The two most common user interface types for any computer are the command-line interface, where computer commands are typed line-by-line, and the graphical user interface (GUI), which uses a visual environment, most commonly a combination of the window, icon, menu, and pointer elements, also known as WIMP. For personal computers, including smartphones and tablet computers, and for workstations, user input is typically from a combination of keyboard, mouse, and trackpad or touchscreen, all of which are connected to the operating system with specialized software. Personal computer users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most personal computers. The software to support GUIs is more complex than a command line for input and plain text output. Plain text output is often preferred by programmers, and is easy to support. Operating system development as a hobby A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and has few users and active developers. In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is their own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests. Examples of hobby operating systems include Syllable and TempleOS. Diversity of operating systems and portability If an application is written for use on a specific operating system, and is ported to another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.), requiring the application to be adapted, changed, or otherwise maintained. This cost of supporting operating system diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries. Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs. Popular operating systems Android is the most popular operating system with a 46% market share, followed by Microsoft Windows at 26%, iOS and iPadOS at 18%, macOS at 5%, and Linux at 1%. Android, iOS, and iPadOS are mobile operating systems, while Windows, macOS, and Linux are desktop operating systems. Linux Linux is free software distributed under the GNU General Public License (GPL), which means that all of its derivatives are legally required to release their source code. 
Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy. Its design is similar to other UNIX systems not using a microkernel. It is written in C and uses UNIX System V syntax, but also supports BSD syntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, while supporting multiple users and employing preemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16 MB of RAM, but still is used on large multiprocessor systems. Similar to other UNIX systems, Linux distributions are composed of a kernel, system libraries, and system utilities. Linux has a graphical user interface (GUI) with a desktop, folder and file icons, as well as the option to access the operating system via a command line. Android is a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity on smartphones and, to a lesser extent, embedded systems needing a GUI, such as "smart watches, automotive dashboards, airplane seatbacks, medical devices, and home appliances". Unlike Linux, much of Android is written in Java and uses object-oriented design. Microsoft Windows Windows is a proprietary operating system that is widely used on desktop computers, laptops, tablets, phones, workstations, enterprise servers, and Xbox consoles. The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on, energy efficiency and support for dynamic devices also became priorities. Windows Executive works via kernel-mode objects for important data structures like processes, threads, and sections (memory objects, for example files). The operating system supports demand paging of virtual memory, which speeds up I/O for many applications. I/O device drivers use the Windows Driver Model. The NTFS file system has a master table and each file is represented as a record with metadata. The scheduling includes preemptive multitasking. Windows has many security features; especially important are the use of access-control lists and integrity levels. Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.
Technology
Operating systems
null
22203
https://en.wikipedia.org/wiki/Organic%20compound
Organic compound
Some chemical authorities define an organic compound as a chemical compound that contains a carbon–hydrogen or carbon–carbon bond; others consider an organic compound to be any chemical compound that contains carbon. For example, carbon-containing compounds such as alkanes (e.g. methane ) and its derivatives are universally considered organic, but many others are sometimes considered inorganic, such as halides of carbon without carbon-hydrogen and carbon-carbon bonds (e.g. carbon tetrachloride ), and certain compounds of carbon with nitrogen and oxygen (e.g. cyanide ion , hydrogen cyanide , chloroformic acid , carbon dioxide , and carbonate ion ). Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds comprise the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate salts and cyanide salts), along with a few other exceptions (e.g., carbon dioxide, and even hydrogen cyanide despite the fact it contains a carbon-hydrogen bond), are generally considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive. Although organic compounds make up only a small percentage of Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically-produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically. In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom. Definition For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and ) and cyanides are generally considered inorganic compounds. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes and carbon nanotubes are also excluded because they are simple substances composed of a single element and so not generally considered chemical compounds. The word "organic" in this context does not mean "natural". History Vitalism Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess. In the 1810s, Jöns Jacob Berzelius argued that a regulative force must exist within living bodies. Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). 
Vitalism taught that formation of these "organic" compounds were fundamentally different from the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories. Vitalism survived for a short period after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving vitalism. Modern classification and ambiguities Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure. The organic compound L-isoleucine molecule presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, as well as covalent bonds from carbon to oxygen and to nitrogen. As described in detail below, any definition of organic compound that uses simple, broadly-applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered "inorganic". The list of substances so excluded varies from author to author. Still, it is generally agreed upon that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, ), as well as other metal and semimetal carbides (including "ionic" carbides, e.g, and and "covalent" carbides, e.g. and SiC, and graphite intercalation compounds, e.g. ). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides of carbon (CO, , and arguably, ), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, , BrCN, cyanate anion , etc.), and heavier analogs thereof (e.g., cyaphide anion , , COS; although carbon disulfide is often classed as an organic solvent). Halides of carbon without hydrogen (e.g., and ), phosgene (), carboranes, metal carbonyls (e.g., nickel tetracarbonyl), mellitic anhydride (), and other exotic oxocarbons are also considered inorganic by some authorities. Nickel tetracarbonyl () and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide. 
Nickel tetracarbonyl is typically classified as an organometallic compound as it satisfies the broad definition that organometallic chemistry covers all compounds that contain at least one carbon to metal covalent bond; it is unknown whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe-C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both. Metal complexes with organic ligands but no carbon-metal bonds (e.g., ) are not considered organometallic; instead, they are called metal-organic compounds (and might be considered organic). The relatively narrow definition of organic compounds as those containing C-H bonds excludes compounds that are (historically and practically) considered organic. Neither urea nor oxalic acid are organic by this definition, yet they were two key compounds in the vitalism debate. However, the IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid as organic compounds. Other compounds lacking C-H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C-H bonds, is considered a possible organic compound in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite (). A slightly broader definition of the organic compound includes all compounds bearing C-H or C-C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, and would be considered by this rule to be "inorganic", whereas , , and would be organic, though these compounds share many physical and chemical properties. Classification Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and a phosphorus. Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers. Natural compounds Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereoisometrically complicated molecules present in reasonable concentrations in living organisms. Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils. Synthetic compounds Compounds that are prepared by reaction of other compounds are known as "synthetic". They may be either compounds that are already found in plants/animals or those artificial compounds that do not occur naturally. 
Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds. Biotechnology Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature. Databases The CAS database is the most comprehensive repository for data on organic compounds. The search tool SciFinder is offered. The Beilstein database contains information on 9.8 million substances, covers the scientific literature from 1771 to the present, and is today accessible via Reaxys. Structures and a large diversity of physical and chemical properties are available for each substance, with reference to original literature. PubChem contains 18.4 million entries on compounds and especially covers the field of medicinal chemistry. A great number of more specialized databases exist for diverse branches of organic chemistry. Structure determination The main tools are proton and carbon-13 NMR spectroscopy, IR Spectroscopy, Mass spectrometry, UV/Vis Spectroscopy and X-ray crystallography.
Physical sciences
Chemical compounds: General
null
22205
https://en.wikipedia.org/wiki/Oasis
Oasis
In ecology, an oasis (; : oases ) is a fertile area of a desert or semi-desert environment that sustains plant life and provides habitat for animals. Surface water may be present, or water may only be accessible from wells or underground channels created by humans. In geography, an oasis may be a current or past rest stop on a transportation route, or less-than-verdant location that nonetheless provides access to underground water through deep wells created and maintained by humans. Although they depend on a natural condition, such as the presence of water that may be stored in reservoirs and used for irrigation, most oases, as we know them, are artificial. The word oasis came into English from , from , , which in turn is a direct borrowing from Demotic Egyptian. The word for oasis in the latter-attested Coptic language (the descendant of Demotic Egyptian) is wahe or ouahe which means a "dwelling place". Oasis in Arabic is wāḥa (). Description Oases develop in "hydrologically favored" locations that have attributes such as a high water table, seasonal lakes, or blockaded wadis. Oases are made when sources of freshwater, such as underground rivers or aquifers, irrigate the surface naturally or via man-made wells. The presence of water on the surface or underground is necessary and the local or regional management of this essential resource is strategic, but not sufficient to create such areas: continuous human work and know-how (a technical and social culture) are essential to maintain such ecosystems. Some of the possible human contributions to maintaining an oasis include digging and maintaining wells, digging and maintaining canals, and continuously removing opportunistic plants that threaten to gorge themselves on water and fertility needed to maintain human and animal food supplies. Stereotypically, an oasis has a "central pool of open water surrounded by a ring of water-dependent shrubs and trees…which are in turn encircled by an outlying transition zone to desert plants." Rain showers provide subterranean water to sustain natural oases, such as the Tuat. Substrata of impermeable rock and stone can trap water and retain it in pockets, or on long faulting subsurface ridges or volcanic dikes water can collect and percolate to the surface. Any incidence of water is then used by migrating birds, which also pass seeds with their droppings which will grow at the water's edge forming an oasis. It can also be used to plant crops. Geography Oases in the Middle East and North Africa cover about , however, they support the livelihood of about 10 million inhabitants. The stark ratio of oasis to desert land in the world means that the oasis ecosystem is "relatively minute, rare and precious." There are 90 “major oases” within the Sahara Desert. Some of their fertility may derive from irrigation systems called foggaras, khettaras, lkhttarts, or a variety of other regional names. In some oases systems, there is "a geometrical system of raised channels that release controlled amounts of the water into individual plots, soaking the soil." History Oases often have human histories that are measured in millennia. Archeological digs at Ein Gedi in the Dead Sea Valley have found evidence of settlement dating to 6,000 BC. Al-Ahsa on the Arabian Peninsula shows evidence of human residence dating to the Neolithic. 
Anthropologically, the oasis is "an area of sedentary life, which associates the city [medina] or village [ksar] with its surrounding feeding source, the palm grove, within a relational and circulatory nomadic system." The location of oases has been of critical importance for trade and transportation routes in desert areas; caravans must travel via oases so that supplies of water and food can be replenished. Thus, political or military control of an oasis has in many cases meant control of trade on a particular route. For example, the oases of Awjila, Ghadames and Kufra, situated in modern-day Libya, have at various times been vital to both north–south and east–west trade in the Sahara Desert. The location of oases also informed the Darb El Arba'īn trade route from Sudan to Egypt, as well as the caravan route from the Niger River to Tangier, Morocco. The Silk Road "traced its course from water hole to water hole, relying on oasis communities such as Turpan in China and Samarkand in Uzbekistan." According to the United Nations, "Oases are at the very heart of the overall development of peri-Saharan countries due to their geographical location and the fact they are preferred migration routes in times of famine or insecurity in the region." Oases in Oman, on the Arabian Peninsula near the Persian Gulf, vary somewhat from the Saharan form. While still located in an arid or semi-arid zone with a date palm overstory, these oases are usually located below plateaus and "watered either by springs or by aflaj, tunnel systems dug into the ground or carved into the rock to tap underground aquifers." This rainwater harvesting system "never developed a serious salinity problem." Palm Oasis In the drylands of southwestern North America, there is a habitat form called Palm Oasis (alternately Palm Series or Oasis Scrub Woodland) that has the native California fan palm as the overstory species. These Palm Oases can be found in California, Arizona, Baja California, and Sonora. Agroforestry People who live in an oasis must manage land and water use carefully. The most important plant in an oasis is the date palm (Phoenix dactylifera L.), which forms the upper layer. These palm trees provide shade for smaller understory trees like apricots, dates, figs, olives, and peach trees, which form the middle layer. Market-garden vegetables, some cereals (such as sorghum, barley, millet, and wheat), and/or mixed animal fodder, are grown in the bottom layer where there is more moisture. The oasis is integrated into its desert environment through an often close association with nomadic transhumant livestock farming (very often pastoral and sedentary populations are clearly distinguished). The fertility of the oasis soil is restored by "cyclic organic inputs of animal origin." In summary, an oasis palm grove is a highly anthropized and irrigated area that supports a traditionally intensive and polyculture-based agriculture. Responding to environmental constraints, the three strata create what is called the "oasis effect". The three layers and all their interaction points create a variety of combinations of "horizontal wind speed, relative air temperature and relative air humidity." The plantings—through a virtuous cycle of wind reduction, increased shade and evapotranspiration—create a microclimate favorable to crops; "measurements taken in different oases have showed that the potential evapotranspiration of the areas was reduced by 30 to 50 percent within the oasis." 
The keystone date palm trees are "a main income source and staple food for local populations in many countries in which they are cultivated, and have played significant roles in the economy, society, and environment of those countries." Challenges for date palm oasis polycultures include "low rainfall, high temperatures, water resources often high in salt content, and high incidence of pests." Distressed systems Many historic oases have struggled with drought and inadequate maintenance. According to a United Nations report on the future of oases in the Sahara and Sahel, "Increasingly... oases are subject to various pressures, heavily influenced by the effects of climate change, decreasing groundwater levels and a gradual loss of cultural heritage due to a fading historical memory concerning traditional water management techniques. These natural pressures are compounded by demographic pressures and the introduction of modern water pumping techniques that can disrupt traditional resource management schemes, particularly in the North Saharan oases." For example, five historic oases in the Western Desert of Egypt (Kharga, Dakhla, Farafra, Baharyia, and Siwa) once had "flowing spring and wells" but due to the decline of groundwater heads because of overuse for land reclamation projects those water sources are no more and the oases suffer as a result. Morocco has lost two-thirds of its oasis habitat over the last 100 years due to heat, drought, and water scarcity. The Ferkla Oases in Morocco once drew on water from the Ferkla, Sat and Tangarfa Rivers but they are now dry but for a few days a year. List of places called oases New World dryland systems with oasis-like attributes Huacachina, Peru Quitobaquito, Organ Pipe Cactus National Monument, Arizona Kitowok, Sonora, Mexico Fish Springs National Wildlife Refuge in Utah, United States Havasu Falls, Grand Canyon, Arizona Zzyzx in Mojave National Preserve, California Cuatro Ciengas basin, Chihuahuan Desert, Mexico Oasis Spring Ecological Reserve, Salton Sea, California Gallery of oases Practical matters A 1920 USGS publication about watering holes in the deserts of California and Arizona gave this advice for travelers seeking oases:
Physical sciences
Fluvial landforms
null
22208
https://en.wikipedia.org/wiki/Organic%20chemistry
Organic chemistry
Organic chemistry is a subdiscipline within chemistry involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms. Study of structure determines their structural formula. Study of properties includes physical and chemical properties, and evaluation of chemical reactivity to understand their behavior. The study of organic reactions includes the chemical synthesis of natural products, drugs, and polymers, and study of individual organic molecules in the laboratory and via theoretical (in silico) study. The range of chemicals studied in organic chemistry includes hydrocarbons (compounds containing only carbon and hydrogen) as well as compounds based on carbon, but also containing other elements, especially oxygen, nitrogen, sulfur, phosphorus (included in many biochemicals) and the halogens. Organometallic chemistry is the study of compounds containing carbon–metal bonds. Organic compounds form the basis of all earthly life and constitute the majority of known chemicals. The bonding patterns of carbon, with its valence of four—formal single, double, and triple bonds, plus structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They form the basis of, or are constituents of, many commercial products including pharmaceuticals; petrochemicals and agrichemicals, and products made from them including lubricants, solvents; plastics; fuels and explosives. The study of organic chemistry overlaps organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, and materials science. Educational aspects Organic chemistry is typically taught at the college or university level. It is considered a very challenging course but has also been made accessible to students. History Before the 18th century, chemists generally believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds. According to the concept of vitalism (vital force theory), organic matter was endowed with a "vital force". During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from inorganic starting materials (the salts potassium cyanate and ammonium sulfate), in what is now called the Wöhler synthesis. Although Wöhler himself was cautious about claiming he had disproved vitalism, this was the first time a substance thought to be organic was synthesized in the laboratory without biological (organic) starting materials. The event is now generally accepted as indeed disproving the doctrine of vitalism. After Wöhler, Justus von Liebig worked on the organization of organic chemistry, being considered one of its principal founders. In 1856, William Henry Perkin, while trying to manufacture quinine, accidentally produced the organic dye now known as Perkin's mauve. 
His discovery, made widely known through its financial success, greatly increased interest in organic chemistry. A crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently in 1858 by both Friedrich August Kekulé and Archibald Scott Couper. Both researchers suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions. The era of the pharmaceutical industry began in the last decade of the 19th century when the German company, Bayer, first manufactured acetylsalicylic acid—more commonly known as aspirin. By 1910 Paul Ehrlich and his laboratory group began developing arsenic-based arsphenamine, (Salvarsan), as the first effective medicinal treatment of syphilis, and thereby initiated the medical practice of chemotherapy. Ehrlich popularized the concepts of "magic bullet" drugs and of systematically improving drug therapies. His laboratory made decisive contributions to developing antiserum for diphtheria and standardizing therapeutic serums. Early examples of organic reactions and applications were often found because of a combination of luck and preparation for unexpected observations. The latter half of the 19th century however witnessed systematic studies of organic compounds. The development of synthetic indigo is illustrative. The production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals. In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin. The multiple-step synthesis of complex organic compounds is called total synthesis. Total synthesis of complex natural compounds increased in complexity to glucose and terpineol. For example, cholesterol-related compounds have opened ways to synthesize complex human hormones and their modified derivatives. Since the start of the 20th century, complexity of total syntheses has been increased to include molecules of high complexity such as lysergic acid and vitamin B12. The discovery of petroleum and the development of the petrochemical industry spurred the development of organic chemistry. Converting individual petroleum compounds into types of compounds by various chemical processes led to organic reactions enabling a broad range of industrial and commercial products including, among (many) others: plastics, synthetic rubber, organic adhesives, and various property-modifying petroleum additives and catalysts. The majority of chemical compounds occurring in biological organisms are carbon compounds, so the association between organic chemistry and biochemistry is so close that biochemistry might be regarded as in essence a branch of organic chemistry. Although the history of biochemistry might be taken to span some four centuries, fundamental understanding of the field only began to develop in the late 19th century and the actual term biochemistry was coined around the start of 20th century. 
Research in the field increased throughout the twentieth century, without any indication of slackening in the rate of increase, as may be verified by inspection of abstraction and indexing services such as BIOSIS Previews and Biological Abstracts, which began in the 1920s as a single annual volume, but has grown so drastically that by the end of the 20th century it was only available to the everyday user as an online electronic database. Characterization Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity; chromatography techniques are especially important for this application, and include HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, evaporation, magnetic separation and solvent extraction. Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis. Listed in approximate order of utility, the chief analytical methods are: Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting the complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry – hydrogen and carbon – exist naturally with NMR-responsive isotopes, respectively 1H and 13C. Elemental analysis: A destructive method used to determine the elemental composition of a molecule.
Physical sciences
Chemistry
null
22210
https://en.wikipedia.org/wiki/One-time%20pad
One-time pad
In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition. The resulting ciphertext will be impossible to decrypt or break if the following four conditions are met: The key must be at least as long as the plaintext. The key must be truly random. The key must never be reused in whole or in part. The key must be kept completely secret by the communicating parties. It has also been mathematically proven that any cipher with the property of perfect secrecy must use keys with effectively the same requirements as OTP keys. Digital versions of one-time pad ciphers have been used by nations for critical diplomatic and military communication, but the problems of secure key distribution make them impractical for most applications. First described by Frank Miller in 1882, the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad. Derived from his Vernam cipher, the system was a cipher that combined a message with a key read from a punched tape. In its original form, Vernam's system was vulnerable because the key tape was a loop, which was reused whenever the loop made a full cycle. One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible. The "pad" part of the name comes from early implementations where the key material was distributed as a pad of paper, allowing the current top sheet to be torn off and destroyed after use. For concealment the pad was sometimes so small that a powerful magnifying glass was required to use it. The KGB used pads of such size that they could fit in the palm of a hand, or in a walnut shell. To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could easily be burned after use. There is some ambiguity to the term "Vernam cipher" because some sources use "Vernam cipher" and "one-time pad" synonymously, while others refer to any additive stream cipher as a "Vernam cipher", including those based on a cryptographically secure pseudorandom number generator (CSPRNG). History Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy. The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented and later patented in 1919 () a cipher based on teleprinter technology. Each character in a message was electrically combined with a character on a punched paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system. The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. 
For added security, secret numbers could be combined with each code group before transmission (usually by modular addition), with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923. A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park. The final discovery was made by information theorist Claude Shannon in the 1940s, who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949. At the same time, Soviet information theorist Vladimir Kotelnikov had independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified. There also exists a quantum analogue of the one-time pad, which can be used to exchange quantum states along a one-way quantum channel with perfect secrecy, which is sometimes used in quantum computing. It can be shown that a shared secret of at least 2n classical bits is required to exchange an n-qubit quantum state along a one-way quantum channel (by analogy with the result that a key of n bits is required to exchange an n-bit message with perfect secrecy). A scheme proposed in 2000 achieves this bound. One way to implement this quantum one-time pad is by dividing the 2n-bit key into n pairs of bits. To encrypt the state, for each pair of bits i in the key, one would apply an X gate to qubit i of the state if and only if the first bit of the pair is 1, and apply a Z gate to qubit i of the state if and only if the second bit of the pair is 1. Decryption involves applying this transformation again, since X and Z are their own inverses. This can be shown to be perfectly secret in a quantum setting. Example Suppose Alice wishes to send the message hello to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message". The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., a is 0, b is 1, and so on.) 
In this example, the technique is to combine the key and the message using modular addition, not unlike the Vigenère cipher. The numerical values of corresponding message and key letters are added together, modulo 26. So, if key material begins with XMCKL and the message is hello, then the coding would be done as follows:

   h       e       l       l       o      message
   7 (h)   4 (e)  11 (l)  11 (l)  14 (o)  message
+ 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)  key
= 30      16      13      21      25      message + key
=  4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  (message + key) mod 26
   E       Q       N       V       Z      → ciphertext

If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A. The ciphertext to be sent to Bob is thus EQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:

   E       Q       N       V       Z      ciphertext
   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  ciphertext
− 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)  key
= −19      4      11      11      14      ciphertext − key
=  7 (h)   4 (e)  11 (l)  11 (l)  14 (o)  ciphertext − key (mod 26)
   h       e       l       l       o      → message

Similar to the above, if a number is negative, then 26 is added to make the number zero or higher. Thus Bob recovers Alice's plaintext, the message hello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of flash paper, paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash. The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The exclusive or (XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key. Attempt at cryptanalysis To continue the example from above, suppose Eve intercepts Alice's ciphertext: EQNVZ. If Eve tried every possible key, she would find that the key XMCKL would produce the plaintext hello, but she would also find that the key TQURI would produce the plaintext later, an equally plausible message:

   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  ciphertext
− 19 (T)  16 (Q)  20 (U)  17 (R)   8 (I)  possible key
= −15      0      −7       4      17      ciphertext − key
= 11 (l)   0 (a)  19 (t)   4 (e)  17 (r)  ciphertext − key (mod 26)

In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext. If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. 
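The arithmetic above is easy to check mechanically. What follows is a minimal Python sketch (illustrative only, not part of the original article; the function name is made up): it encrypts hello with the key XMCKL, decrypts it back, and also "decrypts" the same ciphertext with the key TQURI to obtain later, showing that the ciphertext alone cannot distinguish between candidate plaintexts.

```python
# Minimal sketch of the letter-based one-time pad arithmetic shown above.
# Letters are mapped a=0, b=1, ..., z=25; encryption adds the key letters
# modulo 26, decryption subtracts them.

def otp_letters(text: str, key: str, decrypt: bool = False) -> str:
    if len(key) < len(text):
        raise ValueError("key must be at least as long as the message")
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(t) - 65 + sign * (ord(k) - 65)) % 26 + 65)
        for t, k in zip(text.upper(), key.upper())
    )

ciphertext = otp_letters("hello", "XMCKL")                    # 'EQNVZ'
recovered  = otp_letters(ciphertext, "XMCKL", decrypt=True)   # 'HELLO'
plausible  = otp_letters(ciphertext, "TQURI", decrypt=True)   # 'LATER'
print(ciphertext, recovered, plausible)
```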
If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some random incorrect key also producing two sensible plaintexts are very slim). Perfect secrecy One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell System Technical Journal in 1949. If properly used, one-time pads are secure in this sense even against adversaries with infinite computational power. Shannon proved, using information theoretic considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext. Conventional symmetric encryption algorithms use complex patterns of substitution and transpositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or even partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization or the discrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack. Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message. The parts of the plaintext that are known will reveal only the parts of the key corresponding to them, and they correspond on a strictly one-to-one basis; a uniformly random key's bits will be independent. Quantum cryptography and post-quantum cryptography involve studying the impact of quantum computers on information security. Quantum computers have been shown by Peter Shor and others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer. One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker. Problems Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires: Truly random, as opposed to pseudorandom, one-time pad values, which is a non-trivial requirement. 
Random number generation in computers is often difficult, and pseudorandom number generators are often used for their speed and usefulness for most applications. True random number generators exist, but are typically slower and more specialized. Secure generation and exchange of the one-time pad values, which must be at least as long as the message. This is important because the security of the one-time pad depends on the security of the one-time pad exchange. If an attacker is able to intercept the one-time pad value, they can decrypt messages sent using the one-time pad. Careful treatment to make sure that the one-time pad values continue to remain secret and are disposed of correctly, preventing any reuse (partially or entirely)—hence "one-time". Problems with data remanence can make it difficult to completely erase computer media. One-time pads solve few current practical problems in cryptography. High-quality ciphers are widely available and their security is not currently considered a major worry. Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller. Additionally, public key cryptography overcomes the problem of key distribution. True randomness High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuring radioactive emissions. In particular, one-time use is absolutely necessary. For example, if p1 and p2 represent two distinct plaintext messages and they are each encrypted by a common key k, then the respective ciphertexts are given by: c1 = p1 ⊕ k and c2 = p2 ⊕ k, where ⊕ means XOR. If an attacker were to have both ciphertexts c1 and c2, then simply taking the XOR of c1 and c2 yields the XOR of the two plaintexts, p1 ⊕ p2. (This is because taking the XOR of the common key k with itself yields a constant bitstream of zeros.) p1 ⊕ p2 is then the equivalent of a running key cipher. If both plaintexts are in a natural language (e.g., English or Russian), each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project. Key distribution Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using a one-time pad, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely). However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the messages' sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem, assuming fault-tolerant quantum computers. Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk. 
The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into minute particles, still leaves over 4 megabits of data on each particle. In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem. The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent. Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (because of possible data remanence). Authentication As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is cancelled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different from malleability where the plaintext is not necessarily known. Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar to attacks on stream ciphers. Standard techniques to prevent this, such as the use of a message authentication code can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer. 
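Because the pad is combined with the plaintext by a linear operation, the substitution attack described above is straightforward to demonstrate. The following Python sketch is illustrative only (it is not from the article, and secrets.token_bytes is merely a CSPRNG stand-in; a genuine one-time pad requires truly random, never-reused key material): knowing the plaintext and seeing the ciphertext, the attacker recovers the corresponding key bytes and splices in a different message of the same length.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Combine two equal-length byte strings with XOR.
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet jane and me tomorrow at three thirty pm"
key = secrets.token_bytes(len(message))       # CSPRNG stand-in for a truly random pad
ciphertext = xor_bytes(message, key)

# The attacker knows `message` and intercepts `ciphertext`, but never sees `key`.
key_segment = xor_bytes(ciphertext, message)  # XOR cancels the plaintext, exposing the pad bytes
forgery = b"three thirty meeting is cancelled, stay home"
assert len(forgery) == len(message)           # same length, as the text above requires
forged_ciphertext = xor_bytes(forgery, key_segment)

# The legitimate recipient decrypts the forgery with the real pad and reads the attacker's text.
print(xor_bytes(forged_ciphertext, key).decode())
```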
Common implementation errors Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time-pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information theoretical security in one or more ways: The pad is generated via some algorithm that expands one or more small values into a longer "one-time-pad". This applies equally to all algorithms, from insecure basic mathematical operations like square root decimal expansions, to complex, cryptographically secure pseudorandom number generators (CSPRNGs). None of these implementations is a one-time-pad; they are stream ciphers by definition. All one-time pads must be generated by a non-algorithmic process, e.g. by a hardware random number generator. The pad is exchanged using non-information-theoretically secure methods. If the one-time-pad is encrypted with a non-information-theoretically secure algorithm for delivery, the security of the cryptosystem is only as secure as the insecure delivery mechanism. A common flawed delivery mechanism for one-time-pad is a standard hybrid cryptosystem that relies on symmetric key cryptography for pad encryption, and asymmetric cryptography for symmetric key delivery. Common secure methods for one-time pad delivery are quantum key distribution, a sneakernet or courier service, or a dead drop. The implementation does not feature an unconditionally secure authentication mechanism such as a one-time MAC. The pad is reused (exploited during the Venona project, for example). The pad is not destroyed immediately after use. Uses Applicability Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded in mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion. The one-time-pad is the optimum cryptosystem with theoretically perfect secrecy. The one-time-pad is one of the most practical methods of encryption where one or both parties must do all work by hand, without the aid of a computer. This made it important in the pre-computer era, and it could conceivably still be useful in situations where possession of a computer is illegal or incriminating or where trustworthy computers are not available. One-time pads are practical in situations where two parties in a secure environment must be able to depart from one another and communicate from two separate secure environments with perfect secrecy. The one-time-pad can be used in superencryption. The algorithm most commonly associated with quantum key distribution is the one-time pad. The one-time pad is mimicked by stream ciphers. Numbers stations often send messages encrypted with a one-time pad. Quantum and post-quantum cryptography In quantum cryptography, the one-time pad is most commonly used in association with quantum key distribution (QKD). 
QKD is typically associated with the one-time pad because it provides a way of distributing a long shared secret key securely and efficiently (assuming the existence of practical quantum networking hardware). A QKD algorithm uses properties of quantum mechanical systems to let two parties agree on a shared, uniformly random string. Algorithms for QKD, such as BB84, are also able to determine whether an adversarial party has been attempting to intercept key material, and allow for a shared secret key to be agreed upon with relatively few messages exchanged and relatively low computational overhead. At a high level, the schemes work by taking advantage of the destructive way quantum states are measured to exchange a secret and detect tampering. In the original BB84 paper, it was proven that the one-time pad, with keys distributed via QKD, is a perfectly secure encryption scheme. However, this result depends on the QKD scheme being implemented correctly in practice. Attacks on real-world QKD systems exist. For instance, many systems do not send a single photon (or other object in the desired quantum state) per bit of the key because of practical limitations, and an attacker could intercept and measure some of the photons associated with a message, gaining information about the key (i.e. leaking information about the pad), while passing along unmeasured photons corresponding to the same bit of the key. Combining QKD with a one-time pad can also loosen the requirements for key reuse. In 1982, Bennett and Brassard showed that if a QKD protocol does not detect that an adversary was trying to intercept an exchanged key, then the key can safely be reused while preserving perfect secrecy. The one-time pad is an example of post-quantum cryptography, because perfect secrecy is a definition of security that does not depend on the computational resources of the adversary. Consequently, an adversary with a quantum computer would still not be able to gain any more information about a message encrypted with a one-time pad than an adversary with just a classical computer. Historical uses One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment. The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appears to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil and paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession. A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war. A few British one-time tape cipher machines include the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one-time tape that East Germany, Russia, and even Cuba used to send encrypted messages to their agents. 
The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used. The hotline between Moscow and Washington D.C., established in 1963 after the 1962 Cuban Missile Crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other. U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications. Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives as part of Operation Vula, a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian flight attendant acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem. A related notion is the one-time code—a signal, used only once; e.g., "Alpha" for "mission completed", "Bravo" for "mission failed" or even "Torch" for "Allied invasion of French Northern Africa" cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches) are not a cryptographic one-time pad in any significant sense. NSA At least into the 1970s, the U.S. National Security Agency (NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what the NSA called "pro forma" systems, where "the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message." Examples included nuclear launch messages and radio direction finding reports (COMUS). General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece of carbon paper with the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. 
Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (See Commons:Category:NSA one-time pads for illustrations.) The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field". During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting against traffic analysis. In 1955, NSA produced some 1,660,000 rolls of one-time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. By 1972, only 55,000 rolls were produced, as one-time tapes were replaced by rotor machines such as SIGTOT, and later by electronic devices based on shift registers. The NSA describes one-time tape systems like 5-UCO and SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher based KW-26 in 1957. Exploits While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis: In 1944–1945, the U.S. Army's Signals Intelligence Service was able to solve a one-time pad system used by the German Foreign Office for its high-level traffic, codenamed GEE. GEE was insecure because the pads were not sufficiently random—the machine used to generate the pads produced predictable output. In 1945, the US discovered that Canberra–Moscow messages were being encrypted first using a code-book and then using a one-time pad. However, the one-time pad used was the same one used by Moscow for Washington, D.C.–Moscow messages. Combined with the fact that some of the Canberra–Moscow messages included known British government documents, this allowed some of the encrypted messages to be broken. One-time pads were employed by Soviet espionage agencies for covert communications with agents and agent controllers. Analysis has shown that these pads were generated by typists using actual typewriters. This method is not truly random, as it makes the pads more likely to contain certain convenient key sequences. It nevertheless proved generally effective, because the pads were still somewhat unpredictable: the typists were not following rules, and different typists produced different patterns of pads. Without copies of the key material used, only some defect in the generation method or reuse of keys offered much hope of cryptanalysis. Beginning in the late 1940s, US and UK intelligence agencies were able to break some of the Soviet one-time pad traffic to Moscow during WWII as a result of errors made in generating and distributing the key material. One suggestion is that Moscow Centre personnel were somewhat rushed by the presence of German troops just outside Moscow in late 1941 and early 1942, and they produced more than one copy of the same key material during that period. This decades-long effort was finally codenamed VENONA (BRIDE had been an earlier name); it produced a considerable amount of information. Even so, only a small percentage of the intercepted messages were either fully or partially decrypted (a few thousand out of several hundred thousand). The one-time tape systems used by the U.S. 
employed electromechanical mixers to combine bits from the message and the one-time tape. These mixers radiated considerable electromagnetic energy that could be picked up by an adversary at some distance from the encryption equipment. This effect, first noticed by Bell Labs during World War II, could allow interception and recovery of the plaintext of messages being transmitted, a vulnerability code-named Tempest.
Technology
Computer security
null
22213
https://en.wikipedia.org/wiki/Operator%20%28mathematics%29
Operator (mathematics)
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space (possibly and sometimes required to be the same space). There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly (for example in the case of an integral operator), and may be extended so as to act on related objects (an operator that acts on functions may act also on differential equations whose solutions are functions that satisfy the equation). (see Operator (physics) for other examples) The most basic operators are linear maps, which act on vector spaces. Linear operators refer to linear maps whose domain and range are the same space, for example from ℝⁿ to ℝⁿ. Such operators often preserve properties, such as continuity. For example, differentiation and indefinite integration are linear operators; operators that are built from them are called differential operators, integral operators or integro-differential operators. Operator is also used for denoting the symbol of a mathematical operation. This is related with the meaning of "operator" in computer programming (see Operator (computer programming)). Linear operators The most common kind of operators encountered are linear operators. Let U and V be vector spaces over some field F. A mapping A: U → V is linear if A(αx + βy) = αA(x) + βA(y) for all x and y in U, and for all α and β in F. This means that a linear operator preserves vector space operations, in the sense that it does not matter whether you apply the linear operator before or after the operations of addition and scalar multiplication. In more technical words, linear operators are morphisms between vector spaces. In the finite-dimensional case linear operators can be represented by matrices in the following way. Let F be a field, and U and V be finite-dimensional vector spaces over F. Let us select a basis u1, …, un in U and v1, …, vm in V. Then let x = x^i u_i be an arbitrary vector in U (assuming Einstein convention), and A: U → V be a linear operator. Then Ax = x^i A(u_i) = x^i (A u_i)^j v_j. Then a_i^j := (A u_i)^j, taken over all i and j, is the matrix form of the operator A in the fixed bases. The tensor a_i^j does not depend on the choice of x, and Ax = y if and only if a_i^j x^i = y^j. Thus, once bases are fixed, matrices (with one index running over the basis of U and the other over the basis of V) are in bijective correspondence with linear operators from U to V. The important concepts directly related to operators between finite-dimensional vector spaces are the ones of rank, determinant, inverse operator, and eigenspace. Linear operators also play a great role in the infinite-dimensional case. The concepts of rank and determinant cannot be extended to infinite-dimensional matrices. This is why very different techniques are employed when studying linear operators (and operators in general) in the infinite-dimensional case. The study of linear operators in the infinite-dimensional case is known as functional analysis (so called because various classes of functions form interesting examples of infinite-dimensional vector spaces). The space of sequences of real numbers, or more generally sequences of vectors in any vector space, themselves form an infinite-dimensional vector space. The most important cases are sequences of real or complex numbers, and these spaces, together with linear subspaces, are known as sequence spaces. Operators on these spaces are known as sequence transformations. Bounded linear operators over a Banach space form a Banach algebra with respect to the standard operator norm. 
The theory of Banach algebras develops a very general concept of spectra that elegantly generalizes the theory of eigenspaces. Bounded operators Let U and V be two vector spaces over the same ordered field (for example, the real numbers ℝ), each equipped with a norm. Then a linear operator A from U to V is called bounded if there exists C > 0 such that ‖Ax‖_V ≤ C‖x‖_U for every x in U. Bounded operators form a vector space. On this vector space we can introduce a norm that is compatible with the norms of U and V: ‖A‖ = inf{C : ‖Ax‖_V ≤ C‖x‖_U for all x in U}. In the case of operators from U to itself it can be shown that ‖A ∘ B‖ ≤ ‖A‖ · ‖B‖. Any unital normed algebra with this property is called a Banach algebra. It is possible to generalize spectral theory to such algebras. C*-algebras, which are Banach algebras with some additional structure, play an important role in quantum mechanics. Examples Analysis (calculus) From the point of view of functional analysis, calculus is the study of two linear operators: the differential operator d/dt, and the Volterra operator given by f ↦ ∫₀ᵗ f(s) ds. Fundamental analysis operators on scalar and vector fields Three operators are key to vector calculus: Grad (gradient), (with operator symbol ∇) assigns a vector at every point in a scalar field that points in the direction of greatest rate of change of that field and whose norm measures the absolute value of that greatest rate of change. Div (divergence), (with operator symbol ∇·) is a vector operator that measures a vector field's divergence from or convergence towards a given point. Curl, (with operator symbol ∇×) is a vector operator that measures a vector field's curling (winding around, rotating around) trend about a given point. As an extension of vector calculus operators to physics, engineering and tensor spaces, grad, div and curl operators also are often associated with tensor calculus as well as vector calculus. Geometry In geometry, additional structures on vector spaces are sometimes studied. Operators that map such vector spaces to themselves bijectively are very useful in these studies; they naturally form groups under composition. For example, bijective operators preserving the structure of a vector space are precisely the invertible linear operators. They form the general linear group under composition. However, they do not form a vector space under operator addition: for example, both the identity and −identity are invertible (bijective), but their sum, 0, is not. Operators preserving the Euclidean metric on such a space form the isometry group, and those that fix the origin form a subgroup known as the orthogonal group. Operators in the orthogonal group that also preserve the orientation of vector tuples form the special orthogonal group, or the group of rotations. Probability theory Operators are also involved in probability theory, such as expectation, variance, and covariance, which are used to name both number statistics and the operators which produce them. Indeed, every covariance is basically a dot product: every variance is a dot product of a vector with itself, and thus is a quadratic norm; every standard deviation is a norm (square root of the quadratic norm); the corresponding cosine to this dot product is the Pearson correlation coefficient; expected value is basically an integral operator (used to measure weighted shapes in the space). Fourier series and Fourier transform The Fourier transform is useful in applied mathematics, particularly physics and signal processing. 
It is another integral operator; it is useful mainly because it converts a function on one (temporal) domain to a function on another (frequency) domain, in a way effectively invertible. No information is lost, as there is an inverse transform operator. In the simple case of periodic functions, this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves: f(t) = a₀/2 + Σₙ₌₁^∞ [aₙ cos(nωt) + bₙ sin(nωt)]. The tuple (a₀, a₁, b₁, a₂, b₂, …) is in fact an element of an infinite-dimensional vector space ℓ², and thus Fourier series is a linear operator. When dealing with a general function f(t), the transform takes on an integral form: F(ω) = (1/√(2π)) ∫₋∞^∞ f(t) e^(−iωt) dt. Laplace transform The Laplace transform is another integral operator and is involved in simplifying the process of solving differential equations. Given f = f(t), it is defined by: F(s) = (Lf)(s) = ∫₀^∞ e^(−st) f(t) dt.
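As a concrete illustration of the finite-dimensional picture above, the differentiation operator d/dx restricted to polynomials of degree at most three is linear, and in the monomial basis (1, x, x², x³) it is represented by a 4×4 matrix. The following Python/NumPy sketch (illustrative only, not part of the article) builds that matrix, applies it to a coefficient vector, and checks linearity and composition:

```python
import numpy as np

# Matrix of d/dx on polynomials of degree <= 3 in the basis (1, x, x^2, x^3).
# Column j holds the coordinates of d/dx applied to the j-th basis vector:
# d/dx 1 = 0, d/dx x = 1, d/dx x^2 = 2x, d/dx x^3 = 3x^2.
D = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 3.0],
    [0.0, 0.0, 0.0, 0.0],
])

p = np.array([5.0, -1.0, 4.0, 2.0])   # 5 - x + 4x^2 + 2x^3
q = np.array([1.0,  0.0, 0.0, 1.0])   # 1 + x^3

print(D @ p)                           # [-1.  8.  6.  0.], i.e. -1 + 8x + 6x^2
# Linearity: D(2p + 3q) equals 2 D(p) + 3 D(q).
assert np.allclose(D @ (2 * p + 3 * q), 2 * (D @ p) + 3 * (D @ q))
# Composition of operators corresponds to matrix multiplication: D @ D is d^2/dx^2.
print(D @ D)
```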
Mathematics
Mathematical analysis
null
22265
https://en.wikipedia.org/wiki/Ordovician
Ordovician
The Ordovician is a geologic period and system, the second of six periods of the Paleozoic Era, and the second of twelve periods of the Phanerozoic Eon. The Ordovician spans 41.6 million years from the end of the Cambrian Period 485.4 Ma (million years ago) to the start of the Silurian Period 443.8 Ma. The Ordovician, named after the Welsh tribe of the Ordovices, was defined by Charles Lapworth in 1879 to resolve a dispute between followers of Adam Sedgwick and Roderick Murchison, who were placing the same rock beds in North Wales in the Cambrian and Silurian systems, respectively. Lapworth recognized that the fossil fauna in the disputed strata were different from those of either the Cambrian or the Silurian systems, and placed them in a system of their own. The Ordovician received international approval in 1960 (forty years after Lapworth's death), when it was adopted as an official period of the Paleozoic Era by the International Geological Congress. Life continued to flourish during the Ordovician as it had in the earlier Cambrian Period, although the end of the period was marked by the Ordovician–Silurian extinction events. Invertebrates, namely molluscs and arthropods, dominated the oceans, with members of the latter group probably starting their establishment on land during this time, becoming fully established by the Devonian. The first land plants are known from this period. The Great Ordovician Biodiversification Event considerably increased the diversity of life. Fish, the world's first true vertebrates, continued to evolve, and those with jaws may have first appeared late in the period. About 100 times as many meteorites struck the Earth per year during the Ordovician compared with today in a period known as the Ordovician meteor event. It has been theorized that this increase in impacts may originate from a ring system that formed around Earth at the time. Subdivisions In 2008, the ICS erected a formal international system of subdivisions for the Ordovician Period and System. Pre-existing Baltoscandic, British, Siberian, North American, Australian, Chinese, Mediterranean and North-Gondwanan regional stratigraphic schemes are also used locally. Global/regional correlation British stages and ages The Ordovician Period in Britain was traditionally broken into Early (Tremadocian and Arenig), Middle (Llanvirn (subdivided into Abereiddian and Llandeilian) and Llandeilo) and Late (Caradoc and Ashgill) epochs. The corresponding rocks of the Ordovician System are referred to as coming from the Lower, Middle, or Upper part of the column. The Tremadoc corresponds to the ICS's Tremadocian. The Arenig corresponds to the Floian, all of the Dapingian and the early Darriwilian. The Llanvirn corresponds to the late Darriwilian. The Caradoc covers the Sandbian and the first half of the Katian. The Ashgill represents the second half of the Katian, plus the Hirnantian. Ashgill The Ashgill Epoch, the last epoch of the British Ordovician, is made of four ages: the Hirnantian Age, the Rawtheyan Age, the Cautleyan Age, and the Pusgillian Age. These ages make up the time period from c. 450 Ma to c. 443 Ma. The Rawtheyan, the second last of the Ashgill ages, was from c. 449 Ma to c. 445 Ma. It is in the Katian Age of the ICS's Geologic Time Scale. Paleogeography and tectonics During the Ordovician, the southern continents were assembled into Gondwana, which reached from north of the equator to the South Pole. The Panthalassic Ocean, centered in the northern hemisphere, covered over half the globe. 
At the start of the period, the continents of Laurentia (in present-day North America), Siberia, and Baltica (present-day northern Europe) were separated from Gondwana by a wide expanse of ocean. These smaller continents were also sufficiently widely separated from each other to develop distinct communities of benthic organisms. The small continent of Avalonia had just rifted from Gondwana and began to move north towards Baltica and Laurentia, opening the Rheic Ocean between Gondwana and Avalonia. Avalonia collided with Baltica towards the end of the Ordovician. Other geographic features of the Ordovician world included the Tornquist Sea, which separated Avalonia from Baltica; the Aegir Ocean, which separated Baltica from Siberia; and an oceanic area between Siberia, Baltica, and Gondwana which expanded to become the Paleoasian Ocean in Carboniferous time. The Mongol-Okhotsk Ocean formed a deep embayment between Siberia and the Central Mongolian terranes. Most of the terranes of central Asia were part of an equatorial archipelago whose geometry is poorly constrained by the available evidence. The period was one of extensive, widespread tectonism and volcanism. However, orogenesis (mountain-building) was not primarily due to continent-continent collisions. Instead, mountains arose along active continental margins during accretion of arc terranes or ribbon microcontinents. Accretion of new crust was limited to the Iapetus margin of Laurentia; elsewhere, the pattern was of rifting in back-arc basins followed by remerger. This reflected episodic switching from extension to compression. The initiation of new subduction reflected a global reorganization of tectonic plates centered on the amalgamation of Gondwana. The Taconic orogeny, a major mountain-building episode, was well under way in Cambrian times. This continued into the Ordovician, when at least two volcanic island arcs collided with Laurentia to form the Appalachian Mountains. Laurentia was otherwise tectonically stable. An island arc accreted to South China during the period, while subduction along north China (Sulinheer) resulted in the emplacement of ophiolites. The ash fall of the Millburg/Big Bentonite bed, at about 454 Ma, was the largest in the last 590 million years, with an enormous dense rock equivalent volume. Remarkably, this appears to have had little impact on life. There was vigorous tectonic activity along the northwest margin of Gondwana during the Floian, 478 Ma, recorded in the Central Iberian Zone of Spain. The activity reached as far as Turkey by the end of the Ordovician. The opposite margin of Gondwana, in Australia, faced a set of island arcs. The accretion of these arcs to the eastern margin of Gondwana was responsible for the Benambran Orogeny of eastern Australia. Subduction also took place along what is now Argentina (Famatinian Orogeny) at 450 Ma. This involved significant back arc rifting. The interior of Gondwana was tectonically quiet until the Triassic. Towards the end of the period, Gondwana began to drift across the South Pole. This contributed to the Hirnantian glaciation and the associated extinction event. Ordovician meteor event The Ordovician meteor event is a proposed shower of meteors that occurred during the Middle Ordovician Epoch, about 467.5 ± 0.28 million years ago, due to the break-up of the L chondrite parent body. It is not associated with any major extinction event. 
A 2024 study found that craters from this event cluster in a distinct band around the Earth, and that the breakup of the parent body may have formed a ring system for a period of about 40 million years, with frequent falling debris causing these craters. Geochemistry The Ordovician was a time of calcite sea geochemistry in which low-magnesium calcite was the primary inorganic marine precipitate of calcium carbonate. Carbonate hardgrounds were thus very common, along with calcitic ooids, calcitic cements, and invertebrate faunas with dominantly calcitic skeletons. Biogenic aragonite, like that composing the shells of most molluscs, dissolved rapidly on the sea floor after death. Unlike Cambrian times, when calcite production was dominated by microbial and non-biological processes, animals (and macroalgae) became a dominant source of calcareous material in Ordovician deposits. Climate and sea level The Early Ordovician climate was very hot, with intense greenhouse conditions and sea surface temperatures comparable to those during the Early Eocene Climatic Optimum. Carbon dioxide levels were very high at the Ordovician period's beginning. By the late Early Ordovician, the Earth cooled, giving way to a more temperate climate in the Middle Ordovician, with the Earth likely entering the Early Palaeozoic Ice Age during the Sandbian, and possibly as early as the Darriwilian or even the Floian. The Dapingian and Sandbian saw major humidification events evidenced by trace metal concentrations in Baltoscandia from this time. Evidence suggests that global temperatures rose briefly in the early Katian (Boda Event), depositing bioherms and radiating fauna across Europe. The early Katian also witnessed yet another humidification event. Further cooling during the Hirnantian, at the end of the Ordovician, led to the Late Ordovician glaciation. The Ordovician saw the highest sea levels of the Paleozoic, and the low relief of the continents led to many shelf deposits being formed under hundreds of metres of water. The sea level rose more or less continuously throughout the Early Ordovician, leveling off somewhat during the middle of the period. Locally, some regressions occurred, but the sea level rise continued in the beginning of the Late Ordovician. Sea levels fell steadily due to the cooling temperatures for about 3 million years leading up to the Hirnantian glaciation. During this icy stage, sea level seems to have risen and dropped somewhat. Despite much study, the details remain unresolved. In particular, some researchers interpret the fluctuations in sea level as evidence of pre-Hirnantian glaciation, but sedimentary evidence of glaciation is lacking until the end of the period. There is evidence of glaciers during the Hirnantian on the land we now know as Africa and South America, which were near the South Pole at the time, facilitating the formation of the ice caps of the Hirnantian glaciation. As with North America and Europe, Gondwana was largely covered with shallow seas during the Ordovician. Shallow clear waters over continental shelves encouraged the growth of organisms that deposit calcium carbonates in their shells and hard parts. The Panthalassic Ocean covered much of the Northern Hemisphere, and other minor oceans included Proto-Tethys, Paleo-Tethys, Khanty Ocean, which was closed off by the Late Ordovician, Iapetus Ocean, and the new Rheic Ocean. 
Life For most of the Late Ordovician life continued to flourish, but at and near the end of the period there were mass-extinction events that seriously affected conodonts and planktonic forms like graptolites. The trilobites Agnostida and Ptychopariida completely died out, and the Asaphida were much reduced. Brachiopods, bryozoans and echinoderms were also heavily affected, and the endocerid cephalopods died out completely, except for possible rare Silurian forms. The Ordovician–Silurian extinction events may have been caused by an ice age that occurred at the end of the Ordovician Period, due to the expansion of the first terrestrial plants, as the end of the Late Ordovician was one of the coldest times in the last 600 million years of Earth's history. Fauna On the whole, the fauna that emerged in the Ordovician were the template for the remainder of the Palaeozoic. The fauna was dominated by tiered communities of suspension feeders, mainly with short food chains. The ecological system reached a new grade of complexity far beyond that of the Cambrian fauna, which has persisted until the present day. Though less famous than the Cambrian explosion, the Ordovician radiation (also known as the Great Ordovician Biodiversification Event) was no less remarkable; marine faunal genera increased fourfold, resulting in 12% of all known Phanerozoic marine fauna. Several animals also went through a miniaturization process, becoming much smaller than their Cambrian counterparts. Another change in the fauna was the strong increase in filter-feeding organisms. The trilobite, inarticulate brachiopod, archaeocyathid, and eocrinoid faunas of the Cambrian were succeeded by those that dominated the rest of the Paleozoic, such as articulate brachiopods, cephalopods, and crinoids. Articulate brachiopods, in particular, largely replaced trilobites in shelf communities. Their success epitomizes the greatly increased diversity of carbonate shell-secreting organisms in the Ordovician compared to the Cambrian. Ordovician geography had its effect on the diversity of fauna; Ordovician invertebrates displayed a very high degree of provincialism. The widely separated continents of Laurentia and Baltica, then positioned close to the tropics and boasting many shallow seas rich in life, developed distinct trilobite faunas from the trilobite fauna of Gondwana, and Gondwana developed distinct fauna in its tropical and temperate zones. The Tien Shan terrane maintained a biogeographic affinity with Gondwana, and the Alborz margin of Gondwana was linked biogeographically to South China. Southeast Asia's fauna also maintained strong affinities to Gondwana's. North China was biogeographically connected to Laurentia and the Argentinian margin of Gondwana. A Celtic biogeographic province also existed, separate from the Laurentian and Baltican ones. However, tropical articulate brachiopods had a more cosmopolitan distribution, with less diversity on different continents. During the Middle Ordovician, beta diversity began a significant decline as marine taxa began to disperse widely across space. Faunas became less provincial later in the Ordovician, partly due to the narrowing of the Iapetus Ocean, though they were still distinguishable into the late Ordovician. Trilobites in particular were rich and diverse, and experienced rapid diversification in many regions. Trilobites in the Ordovician were very different from their predecessors in the Cambrian. 
Many trilobites developed bizarre spines and nodules to defend against predators such as primitive eurypterids and nautiloids while other trilobites such as Aeglina prisca evolved to become swimming forms. Some trilobites even developed shovel-like snouts for ploughing through muddy sea bottoms. Another unusual clade of trilobites known as the trinucleids developed a broad pitted margin around their head shields. Some trilobites such as Asaphus kowalewski evolved long eyestalks to assist in detecting predators whereas other trilobite eyes in contrast disappeared completely. Molecular clock analyses suggest that early arachnids started living on land by the end of the Ordovician. Although solitary corals date back to at least the Cambrian, reef-forming corals appeared in the early Ordovician, including the earliest known octocorals, corresponding to an increase in the stability of carbonate and thus a new abundance of calcifying animals. Brachiopods surged in diversity, adapting to almost every type of marine environment. Even after GOBE, there is evidence suggesting that Ordovician brachiopods maintained elevated rates of speciation. Molluscs, which appeared during the Cambrian or even the Ediacaran, became common and varied, especially bivalves, gastropods, and nautiloid cephalopods. Cephalopods diversified from shallow marine tropical environments to dominate almost all marine environments. Graptolites, which evolved in the preceding Cambrian period, thrived in the oceans. This includes the distinctive Nemagraptus gracilis graptolite fauna, which was distributed widely during peak sea levels in the Sandbian. Some new cystoids and crinoids appeared. It was long thought that the first true vertebrates (fish — Ostracoderms) appeared in the Ordovician, but recent discoveries in China reveal that they probably originated in the Early Cambrian. The first gnathostome (jawed fish) may have appeared in the Late Ordovician epoch. Chitinozoans, which first appeared late in the Wuliuan, exploded in diversity during the Tremadocian, quickly becoming globally widespread. Several groups of endobiotic symbionts appeared in the Ordovician. In the Early Ordovician, trilobites were joined by many new types of organisms, including tabulate corals, strophomenid, rhynchonellid, and many new orthid brachiopods, bryozoans, planktonic graptolites and conodonts, and many types of molluscs and echinoderms, including the ophiuroids ("brittle stars") and the first sea stars. Nevertheless, the arthropods remained abundant; all the Late Cambrian orders continued, and were joined by the new group Phacopida. The first evidence of land plants also appeared (see evolutionary history of life). In the Middle Ordovician, the trilobite-dominated Early Ordovician communities were replaced by generally more mixed ecosystems, in which brachiopods, bryozoans, molluscs, cornulitids, tentaculitids and echinoderms all flourished, tabulate corals diversified and the first rugose corals appeared. The planktonic graptolites remained diverse, with the Diplograptina making their appearance. One of the earliest known armoured agnathan ("ostracoderm") vertebrates, Arandaspis, dates from the Middle Ordovician. During the Middle Ordovician there was a large increase in the intensity and diversity of bioeroding organisms. This is known as the Ordovician Bioerosion Revolution. It is marked by a sudden abundance of hard substrate trace fossils such as Trypanites, Palaeosabella, Petroxestes and Osprioneides. 
Bioerosion became an important process, particularly in the thick calcitic skeletons of corals, bryozoans and brachiopods, and on the extensive carbonate hardgrounds that appear in abundance at this time. Flora Green algae were common in the Late Cambrian (perhaps earlier) and in the Ordovician. Terrestrial plants probably evolved from green algae, first appearing as tiny non-vascular forms resembling liverworts, in the middle to late Ordovician. Fossil spores found in Ordovician sedimentary rock are typical of bryophytes. Among the first land fungi may have been arbuscular mycorrhiza fungi (Glomerales), playing a crucial role in facilitating the colonization of land by plants through mycorrhizal symbiosis, which makes mineral nutrients available to plant cells; such fossilized fungal hyphae and spores from the Ordovician of Wisconsin have been found with an age of about 460 million years ago, a time when the land flora most likely only consisted of plants similar to non-vascular bryophytes. Microbiota Though stromatolites had declined from their peak in the Proterozoic, they continued to exist in localised settings. End of the period The Ordovician came to a close in a series of extinction events that, taken together, comprise the second largest of the five major extinction events in Earth's history in terms of percentage of genera that became extinct. The only larger one was the Permian–Triassic extinction event. The extinctions occurred approximately 447–444 million years ago and mark the boundary between the Ordovician and the following Silurian Period. At that time all complex multicellular organisms lived in the sea, and about 49% of genera of fauna disappeared forever; brachiopods and bryozoans were greatly reduced, along with many trilobite, conodont and graptolite families. The most commonly accepted theory is that these events were triggered by the onset of cold conditions in the late Katian, followed by an ice age, in the Hirnantian faunal stage, that ended the long, stable greenhouse conditions typical of the Ordovician. The ice age was possibly not long-lasting. Oxygen isotopes in fossil brachiopods show its duration may have been only 0.5 to 1.5 million years. Other researchers (Page et al.) estimate more temperate conditions did not return until the late Silurian. The late Ordovician glaciation event was preceded by a fall in atmospheric carbon dioxide (from 7000 ppm to 4400 ppm). The dip may have been caused by a burst of volcanic activity that deposited new silicate rocks, which draw CO2 out of the air as they erode. Another possibility is that bryophytes and lichens, which colonized land in the middle to late Ordovician, may have increased weathering enough to draw down levels. The drop in selectively affected the shallow seas where most organisms lived. It has also been suggested that shielding of the sun's rays from the proposed Ordovician ring system, which also caused the Ordovician meteor event, may have also led to the glaciation. As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it, which have been detected in Upper Ordovician rock strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time. As glaciers grew, the sea level dropped, and the vast shallow intra-continental Ordovician seas withdrew, which eliminated many ecological niches. When they returned, they carried diminished founder populations that lacked many whole families of organisms. 
They then withdrew again with the next pulse of glaciation, eliminating biological diversity with each change. Species limited to a single epicontinental sea on a given landmass were severely affected. Tropical lifeforms were hit particularly hard in the first wave of extinction, while cool-water species were hit worst in the second pulse. Those species able to adapt to the changing conditions survived to fill the ecological niches left by the extinctions. For example, there is evidence the oceans became more deeply oxygenated during the glaciation, allowing unusual benthic organisms (Hirnantian fauna) to colonize the depths. These organisms were cosmopolitan in distribution and present at most latitudes. At the end of the second event, melting glaciers caused the sea level to rise and stabilise once more. The rebound of life's diversity with the permanent re-flooding of continental shelves at the onset of the Silurian saw increased biodiversity within the surviving Orders. Recovery was characterized by an unusual number of "Lazarus taxa", disappearing during the extinction and reappearing well into the Silurian, which suggests that the taxa survived in small numbers in refugia. An alternate extinction hypothesis suggested that a ten-second gamma-ray burst could have destroyed the ozone layer and exposed terrestrial and marine surface-dwelling life to deadly ultraviolet radiation and initiated global cooling. Recent work considering the sequence stratigraphy of the Late Ordovician argues that the mass extinction was a single protracted episode lasting several hundred thousand years, with abrupt changes in water depth and sedimentation rate producing two pulses of last occurrences of species.
Physical sciences
Geological timescale
Earth science
22266
https://en.wikipedia.org/wiki/Oregano
Oregano
Oregano (, ; Origanum vulgare) is a species of flowering plant in the mint family, Lamiaceae. It was native to the Mediterranean region, but widely naturalised elsewhere in the temperate Northern Hemisphere. Oregano is a woody perennial plant, growing to tall, with opposite leaves long. The flowers which can be white, pink or light purple, are long, and produced in erect spikes in summer. It is sometimes called wild marjoram, while its close relative O. majorana is known as sweet marjoram. Both are widely used as culinary herbs, especially in Turkish, Greek, Spanish, Italian, Latin, and French cuisine. Oregano is also an ornamental plant, with numerous cultivars bred for varying leaf colour, flower colour and habit. Etymology The English word "oregano" is a borrowing of the Spanish , which derives from the Latin , which itself comes from Classical Greek (orī́ganon). The ultimate origin is disputed; some claim it is a compound Greek term that consists of (óros) meaning "mountain", and (gános) meaning "joy", thus, "joy of the mountain" while The Oxford English Dictionary states it is "probably a loanword [as] the plant comes from Africa", and that "joy of the mountain" is a false etymology. Description Oregano is a perennial, although it is grown as an annual in colder climates, as it often does not survive the winter. It grows to tall and wide. The leaves are spade-shaped and olive-green. The flowers are purple, pink or white, long and grouped in clusters. Oregano is related to the herb marjoram, sometimes being referred to as wild marjoram. Chemistry Oregano contains polyphenols, including numerous flavones. The essential oil of oregano is composed primarily of monoterpenoids and monoterpenes, with the relative concentration of each compound varying widely across geographic origin and other factors. Over 60 different compounds have been identified, with the primary ones being carvacrol and thymol ranging to over 80%, while lesser abundant compounds include , , caryophyllene, spathulenol, germacrene D, β-fenchyl alcohol and . Drying of the plant material affects both quantity and distribution of volatile compounds, with methods using higher heat and longer drying times having greater negative impact. A sample of fresh whole plant material found to contain 33 g/kg dry weight (3.1 g/kg wet) decreased to below a third after warm-air convection drying. Much higher concentrations of volatile compounds are achieved towards the end of the growing season. Taxonomy Many subspecies and strains of oregano have been developed by humans over centuries for their unique flavours or other characteristics. Tastes range from spicy or astringent to more complicated and sweet. Simple oregano sold in garden stores as O. vulgare may have a bland taste and larger, less-dense leaves, and is not considered the best for culinary use, with a taste less remarkable and pungent. It can pollinate other more sophisticated strains, but the offspring are rarely better in quality. The related species Origanum onites (Greece, Turkey) and O. syriacum (West Asia) have similar flavours. A closely related plant is marjoram from Turkey, which differs significantly in taste because phenolic compounds are missing from its essential oil. Some varieties show a flavour intermediate between oregano and marjoram. Subspecies Accepted subspecies: O. v. subsp. glandulosum (Desf.) Ietsw. – Tunisia, Algeria O. v. subsp. gracile (K.Koch) Ietsw. (= O. tyttanthum) has glossy green leaves and pink flowers. 
It grows well in pots or containers, and is more often grown for added ornamental value than other oregano. The flavor is pungent and spicy. – Central Asia, Iran, India, Turkey, Afghanistan, Pakistan. O. v. subsp. hirtum (Link) Ietsw. – (Italian oregano, Greek oregano) is a common source of cultivars with a different aroma from those of O. v. gracile. Growth is vigorous and very hardy, with darker green, slightly hairy foliage. Generally, it is considered the best all-purpose culinary subspecies. – Greece, Balkans, Turkey, Cyprus O. v. subsp. virens (Hoffmanns. & Link) Ietsw. – Iberian Peninsula, Macaronesia, Morocco O. v. subsp. viridulum (Martrin-Donos) Nyman – widespread from Corsica to Nepal O. v. subsp. vulgare – widespread across Europe + Asia from Ireland to China; naturalized in North America + Venezuela Cultivars Example cultivars of oregano include: 'Aureum' – golden foliage (greener if grown in shade), mild taste: It has gained the Royal Horticultural Society's Award of Garden Merit 'Greek Kaliteri' – O. v. subsp. hirtum strains/landraces, small, hardy, dark, compact, thick, silvery-haired leaves, usually with purple undersides, excellent reputation for flavor and pungency, as well as medicinal uses, strong, archetypal oregano flavor (Greek kaliteri: the best) 'Hot & Spicy' – O. v. subsp. hirtum strain 'Nana' – dwarf cultivar Cultivars traded as Italian, Sicilian, etc. are usually hardy sweet marjoram (O. × majoricum), a hybrid between the southern Adriatic O. v. subsp. hirtum and sweet marjoram (O. majorana). They have a reputation for sweet and spicy tones, with little bitterness, and are prized for their flavor and compatibility with various recipes and sauces. Cultivation Oregano is planted in early spring, the plants being spaced apart in fairly dry soil, with full sun. It will grow in a pH range between 6.0 (mildly acidic) and 9.0 (strongly alkaline), with a preferred range between 6.0 and 8.0. It prefers a hot, relatively dry climate, but does well in other environments. Uses Culinary Oregano is a culinary herb, used for the flavour of its leaves, which can be more intense when dried than fresh. It has an earthy, warm, and slightly bitter taste, which can vary in intensity. Good-quality oregano may be strong enough to almost numb the tongue, but cultivars adapted to colder climates may have a lesser flavour. Factors such as climate, season, and soil composition may affect the aromatic oils present, and this effect may be greater than the differences between the various species of plants. Among the chemical compounds contributing to the flavour are carvacrol, thymol, limonene, pinene, ocimene, and caryophyllene. Oregano is the staple herb of Italian cuisine, most frequently used with roasted, fried, or grilled vegetables, meat, and fish. Oregano combines well with spicy foods popular in Southern Italy. It is less commonly used in the north of the country, as marjoram is generally preferred. Its popularity in the U.S. began when soldiers returning from World War II brought back with them a taste for the "pizza herb", which had probably been eaten in Southern Italy for centuries. Oregano is widely used in cuisines of the Mediterranean Basin and Latin America, especially in Mexican cuisine and Argentine cuisine. In Turkish cuisine, oregano is mostly used for flavouring meat, especially mutton and lamb. In barbecue and kebab restaurants, it can be usually found as a condiment, together with paprika, salt, and pepper. 
During the summer, generous amounts of dried oregano are often added as a topping to a tomato and cucumber salad in Portugal, but it can be used to season meat and fish dishes as well. In Spain, apart from seasoning, it is used in preparations of a variety of traditional dishes such as morcilla (Iberian pig blood sausage) and adobo sauce for fish and meat. The dried and ground leaves are most often used in Greece to add flavour to Greek salad, and are usually added to the lemon-olive oil sauce that accompanies fish or meat grills and casseroles. In Albania, dried oregano is often used to make a herbal tea, which is especially popular in the northern part of the country. Oregano oil Oregano oil has been used in folk medicine for centuries. Oregano essential oil is extracted from the leaves of the oregano plant. Although oregano or its oil may be used as a dietary supplement, there is no clinical evidence to indicate that either has any effect on human health. In 2014, the U.S. Food and Drug Administration (FDA) warned a Utah company, Young Living, that its herbal products, including oregano essential oil, were being promoted to have numerous unproven anti-disease effects, and so were being sold as unauthorized misbranded drugs subject to seizure and federal penalties. Similar FDA warning letters for false advertising and unproven health claims about oregano essential oil products were published in 2017 and 2018. Other plants called "oregano" Coleus amboinicus, known as Cuban oregano, ('pennyroyal oregano'), ('French oregano'), Mexican mint, Mexican thyme, and many other names, is also of the mint family (Lamiaceae). It has large and somewhat succulent leaves. Common throughout the tropics, including Latin America, Africa, and Southeast Asia, it is probably of eastern-hemisphere origin. Lippia graveolens, Mexican oregano, known in Spanish as ('wild oregano'), is not in the mint family, but in the related vervain family (Verbenaceae). The flavor of Mexican oregano has a stronger savory component instead of the piney hint of rosemary-like flavor in true oregano, and its citrus accent might be more aromatic than in oregano. It is becoming more commonly sold outside of Mexico, especially in the southeastern United States. It is sometimes used as a substitute for epazote leaves. Hedeoma patens, known in Spanish as ('small oregano'), is also among the Lamiaceae. It is used as an herb in the Mexican states of Chihuahua and Coahuila. Poliomintha longiflora, common names: Mexican oregano and rosemary mint, is native to Mexico and also grown and used in the United States.
Biology and health sciences
Herbs and spices
Plants
22286
https://en.wikipedia.org/wiki/Oligocene
Oligocene
The Oligocene is a geologic epoch of the Paleogene Period that extends from about 33.9 million to 23 million years before the present. As with other older geologic periods, the rock beds that define the epoch are well identified but the exact dates of the start and end of the epoch are slightly uncertain. The name Oligocene was coined in 1854 by the German paleontologist Heinrich Ernst Beyrich from his studies of marine beds in Belgium and Germany. The name comes from Ancient Greek (olígos) 'few' and (kainós) 'new', and refers to the sparsity of extant forms of molluscs. The Oligocene is preceded by the Eocene Epoch and is followed by the Miocene Epoch. The Oligocene is the third and final epoch of the Paleogene Period. The Oligocene is often considered an important time of transition, a link between the archaic world of the tropical Eocene and the more modern ecosystems of the Miocene. Major changes during the Oligocene included a global expansion of grasslands, and a regression of tropical broad leaf forests to the equatorial belt. The start of the Oligocene is marked by a notable extinction event called the Grande Coupure; it featured the replacement of European fauna with Asian fauna, except for the endemic rodent and marsupial families. By contrast, the Oligocene–Miocene boundary is not set at an easily identified worldwide event but rather at regional boundaries between the warmer late Oligocene and the relatively cooler Miocene. Boundaries and subdivisions The lower boundary of the Oligocene (its Global Boundary Stratotype Section and Point or GSSP) is placed at the last appearance of the foraminiferan genus Hantkenina in a quarry at Massignano, Italy. However, this GSSP has been criticized as excluding the uppermost part of the type Eocene Priabonian Stage and because it is slightly earlier than important climate shifts that form natural markers for the boundary, such as the global oxygen isotope shift marking the expansion of Antarctic glaciation (the Oi1 event). The upper boundary of the Oligocene is defined by its GSSP at Carrosio, Italy, which coincides with the first appearance of the foraminiferan Paragloborotalia kugleri and with the base of magnetic polarity chronozone C6Cn.2n. The Oligocene faunal stages, from youngest to oldest, are the Chattian and the Rupelian. Tectonics and paleogeography During the Oligocene Epoch, the continents continued to drift toward their present positions. Antarctica became more isolated as deep ocean channels were established between Antarctica and Australia and South America. Australia had been very slowly rifting away from West Antarctica since the Jurassic, but the exact timing of the establishment of ocean channels between the two continents remains uncertain. However, one estimate is that a deep channel was in place between the two continents by the end of the early Oligocene. The timing of the formation of the Drake Passage between South America and Antarctica is also uncertain, with estimates ranging from 49 to 17 mya (early Eocene to Miocene), but oceanic circulation through the Drake Passage may also have been in place by the end of the early Oligocene. This may have been interrupted by a temporary constriction of the Drake Passage from sometime in the middle to late Oligocene (29 to 22 mya) to the middle Miocene (15 mya). The reorganization of the oceanic tectonic plates of the northeastern Pacific, which had begun in the Paleocene, culminated with the arrival of the Murray and Mendocino Fracture Zones at the North American subduction zone in the Oligocene. 
This initiated strike-slip movement along the San Andreas Fault and extensional tectonics in the Basin and Range province, ended volcanism south of the Cascades, and produced clockwise rotation of many western North American terranes. The Rocky Mountains were at their peak. A new volcanic arc was established in western North America, far inland from the coast, reaching from central Mexico through the Mogollon-Datil volcanic field to the San Juan volcanic field, then through Utah and Nevada to the ancestral Northern Cascades. Huge ash deposits from these volcanoes created the White River and Arikaree Groups of the High Plains, with their excellent fossil beds. Between 31 and 26 mya, the Ethiopia-Yemen Continental Flood Basalts were emplaced by the East African large igneous province, which also initiated rifting along the Red Sea and Gulf of Aden. The Alps were rapidly rising in Europe as the African plate continued to push north into the Eurasian plate, isolating the remnants of the Tethys Sea. Sea levels were lower in the Oligocene than in the early Eocene, exposing large coastal plains in Europe and the Gulf Coast and Atlantic Coast of North America. The Obik Sea, which had separated Europe from Asia, retreated early in the Oligocene, creating a persistent land connection between the continents. The Paratethys Sea stretched from what is now the Balkan Peninsula across Central Asia to the Tian Shan region of what is now Xinjiang. There appears to have been a land bridge in the early Oligocene between North America and Europe, since the faunas of the two regions are very similar. However, towards the end of the Oligocene, there was a brief marine incursion in Europe. The rise of the Himalayas during the Oligocene remains poorly understood. One recent hypothesis is that a separate microcontinent collided with south Asia in the early Eocene, and India itself did not collide with south Asia until the end of the Oligocene. The Tibetan Plateau may have reached nearly its present elevation by the late Oligocene. The Andes first became a major mountain chain in the Oligocene, as subduction became more direct into the coastline. Climate Climate during the Oligocene reflected a general cooling trend following the Early Eocene Climatic Optimum. This transformed the Earth's climate from a greenhouse to an icehouse climate. Eocene-Oligocene transition and Oi1 event The Eocene-Oligocene transition was a major cooling event and reorganization of the biosphere, being part of a broader trend of global cooling lasting from the Bartonian to the Rupelian. The transition is marked by the Oi1 event, an oxygen isotope excursion occurring approximately 33.55 million years ago, during which oxygen isotope ratios increased by 1.3. About 0.3–0.4 of this is estimated to be due to major expansion of Antarctic ice sheets. The remaining 0.9 to 1.0 was due to about of global cooling. The transition likely took place in three closely spaced steps over the period from 33.8 to 33.5 mya. By the end of the transition, sea levels had dropped by , and ice sheets were 25% greater in extent than in the modern world. The effects of the transition can be seen in the geological record at many locations around the world. Ice volumes rose as temperature and sea levels dropped. Playa lakes of the Tibetan Plateau disappeared at the transition, pointing to cooling and aridification of central Asia. 
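The partition of the Oi1 isotope excursion quoted above can be read as a simple two-term budget. The calibration used below (roughly 0.22–0.25 ‰ of δ18O per °C of deep-water cooling) is a standard paleothermometry value assumed here purely for illustration; it is not a figure taken from this article:

$$\Delta\delta^{18}\mathrm{O}_{\text{total}} \;\approx\; \Delta\delta^{18}\mathrm{O}_{\text{ice volume}} \;+\; \alpha\,\Delta T, \qquad \alpha \approx 0.22\text{–}0.25\ \text{‰ per °C}$$

With 0.3–0.4 ‰ assigned to ice-sheet growth, the residual 0.9–1.0 ‰ would correspond to roughly 4 °C of cooling under this assumed calibration.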
Pollen and spore counts in marine sediments of the Norwegian-Greenland Sea indicate a drop in winter temperatures at high latitudes of about just prior to the Oi1 event. Borehole dating from the Southeast Faroes drift indicates that deep-ocean circulation from the Arctic Ocean to the North Atlantic Ocean began in the early Oligocene. The best terrestrial record of Oligocene climate comes from North America, where temperatures dropped by in the earliest Oligocene. This change is seen from Alaska to the Gulf Coast. Upper Eocene paleosols reflect annual precipitation of over a meter of rain, but early Oligocene precipitation was less than half this. In central North America, the cooling was by 8.2 ± 3.1 °C over a period of 400,000 years, though there is little indication of significant increase in aridity during this interval. Ice-rafted debris in the Norwegian-Greenland Sea indicated that glaciers had appeared in Greenland by the start of the Oligocene. Continental ice sheets in Antarctica reached sea level during the transition. Glacially rafted debris of early Oligocene age in the Weddell Sea and Kerguelen Plateau, in combination with the Oi1 isotope shift, provides unambiguous evidence of a continental ice sheet on Antarctica by the early Oligocene. The causes of the Eocene-Oligocene transition are not yet fully understood. The timing is wrong for this to be caused either by known impact events or by the volcanic activity on the Ethiopian Plateau. Two other possible drivers of climate change, not mutually exclusive, have been proposed. The first is thermal isolation of the continent of Antarctica by development of the Antarctic Circumpolar Current. Deep sea cores from south of New Zealand suggest that cold deep-sea currents were present by the early Oligocene. However, the timing of this event remains controversial. The other possibility, for which there is considerable evidence, is a drop in atmospheric carbon dioxide levels (pCO2) during the transition. The pCO2 is estimated to have dropped just before the transition, to 760 ppm at the peak of ice sheet growth, then rebounded slightly before resuming a more gradual fall. Climate modeling suggests that glaciation of Antarctica took place only when pCO2 dropped below a critical threshold value. Brachiopod oxygen isotope ratios from New Zealand suggest that a proto-Subtropical Convergence developed during the Early Oligocene, with northern New Zealand being subtropical and southern and eastern New Zealand being cooled by cold, subantarctic water. Middle Oligocene climate and the Oi2 event Oligocene climate following the Eocene-Oligocene event is poorly known. There were several pulses of glaciation in the middle Oligocene, about the time of the Oi2 oxygen isotope shift. This led to the largest drop of sea level in the past 100 million years, by about . This is reflected in a mid-Oligocene incision of continental shelves and unconformities in marine rocks around the world. Some evidence suggests that the climate remained warm at high latitudes even as ice sheets experienced cyclical growth and retreat in response to orbital forcing and other climate drivers. Other evidence indicates significant cooling at high latitudes. Part of the difficulty may be that there were strong regional variations in the response to climate shifts. Evidence of a relatively warm Oligocene suggests an enigmatic climate state, neither hothouse nor icehouse. 
Late Oligocene warming The late Oligocene (26.5 to 24 mya) likely saw a warming trend in spite of low pCO2 levels, though this appears to vary by region. However, Antarctica remained heavily glaciated during this warming period. The late Oligocene warming is discernible in pollen counts from the Tibetan Plateau, which also show that the South Asian Monsoon had already developed by the late Oligocene. Around 25.8 Ma, the South Asian Monsoon underwent an episode of major intensification brought on by the uplift of the Tibetan Plateau. A deep 400,000-year glaciated Oligocene-Miocene boundary event is recorded at McMurdo Sound and King George Island. Biosphere The early Eocene climate was very warm, with crocodilians and temperate plants thriving north of the Arctic Circle. The cooling trend that began in the middle Eocene continued into the Oligocene, bringing both poles well below freezing for the first time in the Phanerozoic. The cooling climate, together with the opening of some land bridges and the closing of others, led to a profound reorganization of the biosphere and loss of taxonomic diversity. Land animals and marine organisms reached a Phanerozoic low in diversity by the late Oligocene, and the temperate forests and jungles of the Eocene were replaced by forest and scrubland. The closing of the Tethys Seaway destroyed its tropical biota. Flora The Oi1 event of the Eocene-Oligocene transition covered the continent of Antarctica with ice sheets, leaving Nothofagus and mosses and ferns clinging to life around the periphery of Antarctica in tundra conditions. Angiosperms continued their expansion throughout the world as tropical and sub-tropical forests were replaced by temperate deciduous forests. Open plains and deserts became more common and grasses expanded from their water-bank habitat in the Eocene moving out into open tracts. The decline in pCO2 favored C4 photosynthesis, which is found only in angiosperms and is particularly characteristic of grasses. However, even at the end of the period, grass was not quite common enough for modern savannas. In North America, much of the dense forest was replaced by patchy scrubland with riparian forests. Subtropical species dominated with cashews and lychee trees present, and temperate woody plants such as roses, beeches, and pines were common. The legumes spread, while sedges and ferns continued their ascent. In Europe, floral assemblages became increasingly affected by strengthening seasonality as it related to wildfire activity. In Pakistan, the flora consisted mainly of dry but dense forests. In northern China, there was a progressive ascendance of open, grassy environments. The Ha Long megafossil flora from the Dong Ho Formation of Oligocene age shows that the Oligocene flora of what is now Vietnam was very similar to its present flora. Kelps make their first appearance in the fossil record during the earliest Oligocene. Fauna Most extant mammal families had appeared by the end of the Oligocene. These included primitive three-toed horses, rhinoceroses, camels, deer, and peccaries. Carnivores such as dogs, nimravids, bears, weasels, and raccoons began to replace the creodonts that had dominated the Paleocene in the Old World. Rodents and rabbits underwent tremendous diversification due to the increase in suitable habitats for ground-dwelling seed eaters, as habitats for squirrel-like nut- and fruit-eaters diminished. The primates, once present in Eurasia, were reduced in range to Africa and South America. 
Many groups, such as equids, entelodonts, rhinos, merycoidodonts, and camelids, became more able to run during this time, adapting to the plains that were spreading as the Eocene rainforests receded. Brontotheres died out in the earliest Oligocene, and creodonts died out outside Africa and the Middle East at the end of the period. Multituberculates, an ancient lineage of primitive mammals that originated back in the Jurassic, also became extinct in the Oligocene, aside from the gondwanatheres. The Eocene-Oligocene transition in Europe and Asia has been characterized as the Grande Coupure. The lowering of sea levels closed the Turgai Strait across the Obik Sea, which had previously separated Asia from Europe. This allowed Asian mammals, such as rhinoceroses and ruminants, to enter Europe and drive endemic species to extinction. Lesser faunal turnovers occurred simultaneously with the Oi2 event and towards the end of the Oligocene. There was significant diversification of mammals in Eurasia, including the giant indricotheres, that grew up to at the shoulder and weighed up to 20 tons. Paraceratherium was one of the largest land mammals ever to walk the Earth. However, the indricotheres were an exception to a general tendency for Oligocene mammals to be much smaller than their Eocene counterparts. The earliest deer, giraffes, pigs, and cattle appeared in the mid-Oligocene in Eurasia. The first felid, Proailurus, originated in Asia during the late Oligocene and spread to Europe. There was only limited migration between Asia and North America. The cooling of central North America at the Eocene-Oligocene transition resulted in a large turnover of gastropods, amphibians, and reptiles. Mammals were much less affected. Crocodilians and pond turtles were replaced by dry land tortoises. Molluscs shifted to more drought-tolerant forms. The White River Fauna of central North America inhabited a semiarid prairie home and included entelodonts like Archaeotherium, camelids (such as Poebrotherium), running rhinoceratoids, three-toed equids (such as Mesohippus), nimravids, protoceratids, and early canids like Hesperocyon. Merycoidodonts, an endemic American group, were very diverse during this time. Australia and South America became geographically isolated and developed their own distinctive endemic fauna. These included the New World and Old World monkeys. The South American continent was home to animals such as pyrotheres and astrapotheres, as well as litopterns and notoungulates. Sebecosuchians, terror birds, and carnivorous metatheres, like the borhyaenids, remained the dominant predators. Africa was also relatively isolated and retained its endemic fauna. These included mastodonts, hyraxes, arsinoitheres, and other archaic forms. Egypt in the Oligocene was an environment of lush forested deltas. Nevertheless, the Early Oligocene saw a major reduction in the diversity of many Afro-Arabian mammal clades, including hyaenodonts, primates, and hystricognath and anomaluroid rodents. During the Oligocene, the Tethyan marine biodiversity hotspot collapsed as the Tethys Ocean contracted. The seas around Southeast Asia and Australia became the new dominant hotspot of marine biodiversity. At sea, 97% of marine snail species, 89% of clams, and 50% of echinoderms of the Gulf Coast did not survive past the earliest Oligocene. New species evolved, but the overall diversity diminished. Cold-water mollusks migrated around the Pacific Rim from Alaska and Siberia. 
The marine animals of Oligocene oceans resembled today's fauna, such as the bivalves. Calcareous cirratulids appeared in the Oligocene. The Oligocene saw the emergence of parrotfishes, as the centre of marine biodiversity shifted from the Central Tethys eastward into the Indo-Pacific. The fossil record of marine mammals is a little spotty during this time, and not as well known as the Eocene or Miocene, but some fossils have been found. The baleen whales and toothed whales had just appeared, and their ancestors, the archaeocete cetaceans, began to decrease in diversity due to their lack of echolocation, which was very useful as the water became colder and cloudier. Other factors in their decline could include climate changes and competition with today's modern cetaceans and the requiem sharks, which also appeared in this epoch. Early desmostylians, like Behemotops, are known from the Oligocene. Pinnipeds appeared near the end of the epoch from an otter-like ancestor. Oceans The Oligocene saw the beginnings of modern ocean circulation, with tectonic shifts causing the opening and closing of ocean gateways. Cooling of the oceans had already commenced by the Eocene/Oligocene boundary, and they continued to cool as the Oligocene progressed. The formation of permanent Antarctic ice sheets during the early Oligocene and possible glacial activity in the Arctic may have influenced this oceanic cooling, though the extent of this influence is still a matter of some significant dispute. Effects of oceanic gateways on circulation The opening and closing of ocean gateways (the opening of the Drake Passage, the opening of the Tasmanian Gateway, the closing of the Tethys seaway, and the final formation of the Greenland–Iceland–Faroes Ridge) played vital parts in reshaping oceanic currents during the Oligocene. As the continents shifted to a more modern configuration, so too did ocean circulation. Drake Passage The Drake Passage is located between South America and Antarctica. Once the Tasmanian Gateway between Australia and Antarctica opened, all that kept Antarctica from being completely isolated by the Southern Ocean was its connection to South America. As the South American continent moved north, the Drake Passage opened and enabled the formation of the Antarctic Circumpolar Current (ACC), which would have kept the cold waters of Antarctica circulating around that continent and strengthened the formation of Antarctic Bottom Water (ABW). With the cold water concentrated around Antarctica, sea surface temperatures and, consequently, continental temperatures would have dropped. The onset of Antarctic glaciation occurred during the early Oligocene, and the effect of the Drake Passage opening on this glaciation has been the subject of much research. However, some controversy still exists as to the exact timing of the passage opening, whether it occurred at the start of the Oligocene or nearer the end. Even so, many theories agree that at the Eocene/Oligocene (E/O) boundary, a still shallow flow existed between South America and Antarctica, permitting the start of an Antarctic Circumpolar Current. Related to the question of when the Drake Passage opened is the dispute over how great an influence its opening had on the global climate. 
While early researchers concluded that the advent of the ACC was highly important, perhaps even the trigger, for Antarctic glaciation and subsequent global cooling, other studies have suggested that the δ18O signature is too strong for glaciation to be the main trigger for cooling. Through study of Pacific Ocean sediments, other researchers have shown that the transition from warm Eocene ocean temperatures to cool Oligocene ocean temperatures took only 300,000 years, which strongly implies that feedbacks and factors other than the ACC were integral to the rapid cooling. The latest hypothesized time for the opening of the Drake Passage is during the early Miocene. Despite the shallow flow between South America and Antarctica, there was not enough of a deep water opening to allow for significant flow to create a true Antarctic Circumpolar Current. If the opening occurred as late as hypothesized, then the Antarctic Circumpolar Current could not have had much of an effect on early Oligocene cooling, as it would not have existed. The earliest hypothesized time for the opening of the Drake Passage is around 30 Ma. One of the possible issues with this timing was the continental debris cluttering up the seaway between the two plates in question. This debris, along with what is known as the Shackleton fracture zone, has been shown in a recent study to be fairly young, only about 8 million years old. The study concludes that the Drake Passage would be free to allow significant deep water flow by around 31 Ma. This would have facilitated an earlier onset of the Antarctic Circumpolar Current. There is some evidence that it occurred much earlier, during the early Eocene. Opening of the Tasman Gateway The other major oceanic gateway opening during this time was the Tasman, or Tasmanian, depending on the paper, gateway between Australia and Antarctica. The time frame for this opening is less disputed than the Drake Passage and is largely considered to have occurred around 34 Ma. As the gateway widened, the Antarctic Circumpolar Current strengthened. Tethys Seaway closing The Tethys Seaway was not a gateway, but rather a sea in its own right. Its closing during the Oligocene had significant impact on both ocean circulation and climate. The collisions of the African plate with the European plate and of the Indian subcontinent with the Asian plate, cut off the Tethys Seaway that had provided a low-latitude ocean circulation. The closure of Tethys built some new mountains (the Zagros range) and drew down more carbon dioxide from the atmosphere, contributing to global cooling. Greenland–Iceland–Faroes The gradual separation of the clump of continental crust and the deepening of the tectonic ridge in the North Atlantic that would become Greenland, Iceland, and the Faroe Islands helped to increase the deep water flow in that area. More information about the evolution of North Atlantic Deep Water will be given a few sections down. Ocean cooling Evidence for ocean-wide cooling during the Oligocene exists mostly in isotopic proxies. Patterns of extinction and patterns of species migration can also be studied to gain insight into ocean conditions. For a while, it was thought that the glaciation of Antarctica may have significantly contributed to the cooling of the ocean, however, recent evidence tends to deny this. Deep water Isotopic evidence suggests that during the early Oligocene, the main source of deep water was the North Pacific and the Southern Ocean. 
As the Greenland-Iceland-Faroe Ridge sank and thereby connected the Norwegian–Greenland Sea with the Atlantic Ocean, the deep water of the North Atlantic began to come into play as well. Computer models suggest that once this occurred, a thermohaline circulation of more modern appearance began. Evidence for the early Oligocene onset of chilled North Atlantic deep water lies in the beginnings of sediment drift deposition in the North Atlantic, such as the Feni and Southeast Faroe drifts. The chilling of the Southern Ocean deep water began in earnest once the Tasmanian Gateway and the Drake Passage opened fully. Regardless of the time at which the opening of the Drake Passage occurred, the effect on the cooling of the Southern Ocean would have been the same. Impact events Recorded extraterrestrial impacts: Haughton impact crater, Nunavut, Canada (23 Ma, crater diameter; now considered questionable as an Oligocene event; later analyses have concluded the crater dates to 39 Ma, placing the event in the Eocene.) Supervolcanic explosions La Garita Caldera (28–26 million years ago) Wah Wah Springs Caldera (30 million years ago)
Physical sciences
Geological timescale
Earth science
22303
https://en.wikipedia.org/wiki/Oxygen
Oxygen
Oxygen is a chemical element with the symbol O and atomic number 8. It is a member of the chalcogen group in the periodic table, a highly reactive nonmetal, and a potent oxidizing agent that readily forms oxides with most elements as well as with other compounds. Oxygen is the most abundant element in Earth's crust, and the third-most abundant element in the universe after hydrogen and helium. At standard temperature and pressure, two oxygen atoms will bind covalently to form dioxygen, a colorless and odorless diatomic gas with the chemical formula . Dioxygen gas currently constitutes 20.95% of the Earth's atmosphere by molar fraction, though this has changed considerably over long periods of time in Earth's history. Oxygen makes up almost half of the Earth's crust in the form of various oxides such as water, carbon dioxide, iron oxides and silicates. All eukaryotic organisms, including plants, animals, fungi, algae and most protists, need oxygen for cellular respiration, which extracts chemical energy by the reaction of oxygen with organic molecules derived from food and releases carbon dioxide as a waste product. In aquatic animals, dissolved oxygen in water is absorbed by specialized respiratory organs called gills, through the skin or via the gut; in terrestrial animals such as tetrapods, oxygen in air is actively taken into the body via specialized organs known as lungs, where gas exchange takes place to diffuse oxygen into the blood and carbon dioxide out, and the body's circulatory system then transports the oxygen to other tissues where cellular respiration takes place. However, in insects, the most successful and biodiverse terrestrial clade, oxygen is directly conducted to the internal tissues via a deep network of airways. Many major classes of organic molecules in living organisms contain oxygen atoms, such as proteins, nucleic acids, carbohydrates and fats, as do the major constituent inorganic compounds of animal shells, teeth, and bone. Most of the mass of living organisms is oxygen as a component of water, the major constituent of lifeforms. Oxygen in Earth's atmosphere is produced by biotic photosynthesis, in which photon energy in sunlight is captured by chlorophyll to split water molecules and then react with carbon dioxide to produce carbohydrates and oxygen is released as a byproduct. Oxygen is too chemically reactive to remain a free element in air without being continuously replenished by the photosynthetic activities of autotrophs such as cyanobacteria, chloroplast-bearing algae and plants. A much rarer triatomic allotrope of oxygen, ozone (), strongly absorbs the UVB and UVC wavelengths and forms a protective ozone layer at the lower stratosphere, which shields the biosphere from ionizing ultraviolet radiation. However, ozone present at the surface is a corrosive byproduct of smog and thus an air pollutant. Oxygen was isolated by Michael Sendivogius before 1604, but it is commonly believed that the element was discovered independently by Carl Wilhelm Scheele, in Uppsala, in 1773 or earlier, and Joseph Priestley in Wiltshire, in 1774. Priority is often given to Priestley because his work was published first. Priestley, however, called oxygen "dephlogisticated air", and did not recognize it as a chemical element. The name oxygen was coined in 1777 by Antoine Lavoisier, who first recognized oxygen as a chemical element and correctly characterized the role it plays in combustion. 
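The biological production and consumption of oxygen described above can be summarized by the familiar overall equations for oxygenic photosynthesis and aerobic respiration; these are standard textbook stoichiometries, given here only as an illustration rather than taken from this article:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

$$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \rightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (ATP)}$$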
Common industrial uses of oxygen include production of steel, plastics and textiles, brazing, welding and cutting of steels and other metals, rocket propellant, oxygen therapy, and life support systems in aircraft, submarines, spaceflight and diving. History of study Early experiments One of the first known experiments on the relationship between combustion and air was conducted by the 2nd century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck. Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration. In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow (1641–1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus. In one experiment, he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air's volume before extinguishing the subjects. From this, he surmised that nitroaereus is consumed in both respiration and combustion. Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body. Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo in the tract "De respiratione". Phlogiston theory Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and the 18th century but none of them recognized it as a chemical element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes. Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731, phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx. Highly combustible materials that leave little residue, such as wood or coal, were thought to be made mostly of phlogiston; non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, it was based on observations of what happens when something burns, that most common objects appear to become lighter and seem to lose something in the process. 
Discovery Polish alchemist, philosopher, and physician Michael Sendivogius (Michał Sędziwój) in his work De Lapide Philosophorum Tractatus duodecim e naturae fonte et manuali experientia depromti ["Twelve Treatises on the Philosopher's Stone drawn from the source of nature and manual experience"] (1604) described a substance contained in air, referring to it as 'cibus vitae' (food of life,) and according to Polish historian Roman Bugaj, this substance is identical with oxygen. Sendivogius, during his experiments performed between 1598 and 1604, properly recognized that the substance is equivalent to the gaseous byproduct released by the thermal decomposition of potassium nitrate. In Bugaj's view, the isolation of oxygen and the proper association of the substance to that part of air which is required for life, provides sufficient evidence for the discovery of oxygen by Sendivogius. This discovery of Sendivogius was however frequently denied by the generations of scientists and chemists which succeeded him. It is also commonly claimed that oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide (HgO) and various nitrates in 1771–72. Scheele called the gas "fire air" because it was then the only known agent to support combustion. He wrote an account of this discovery in a manuscript titled Treatise on Air and Fire, which he sent to his publisher in 1775. That document was published in 1777. In the meantime, on August 1, 1774, an experiment conducted by the British clergyman Joseph Priestley focused sunlight on mercuric oxide contained in a glass tube, which liberated a gas he named "dephlogisticated air". He noted that candles burned brighter in the gas and that a mouse was more active and lived longer while breathing it. After breathing the gas himself, Priestley wrote: "The feeling of it to my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly light and easy for some time afterwards." Priestley published his findings in 1775 in a paper titled "An Account of Further Discoveries in Air", which was included in the second volume of his book titled Experiments and Observations on Different Kinds of Air. Because he published his findings first, Priestley is usually given priority in the discovery. The French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele had also dispatched a letter to Lavoisier on September 30, 1774, which described his discovery of the previously unknown substance, but Lavoisier never acknowledged receiving it (a copy of the letter was found in Scheele's belongings after his death). Lavoisier's contribution Lavoisier conducted the first adequate quantitative experiments on oxidation and gave the first correct explanation of how combustion works. He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and to prove that the substance discovered by Priestley and Scheele was a chemical element. In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were heated in a closed container. He noted that air rushed in when he opened the container, which indicated that part of the trapped air had been consumed. 
He also noted that the tin had increased in weight and that increase was the same as the weight of the air that rushed back in. This and other experiments on combustion were documented in his book Sur la combustion en général, which was published in 1777. In that work, he proved that air is a mixture of two gases; 'vital air', which is essential to combustion and respiration, and azote (Gk. "lifeless"), which did not support either. Azote later became nitrogen in English, although it has kept the earlier name in French and several other European languages. Etymology Lavoisier renamed 'vital air' to oxygène in 1777 from the Greek roots (oxys) (acid, literally 'sharp', from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistakenly believed that oxygen was a constituent of all acids. Chemists (such as Sir Humphry Davy in 1812) eventually determined that Lavoisier was wrong in this regard (e.g. Hydrogen chloride (HCl) is a strong acid that doesn't contain oxygen), but by then the name was too well established. Oxygen entered the English language despite opposition by English scientists and the fact that the Englishman Priestley had first isolated the gas and written about it. This is partly due to a poem praising the gas titled "Oxygen" in the popular book The Botanic Garden (1791) by Erasmus Darwin, grandfather of Charles Darwin. Later history John Dalton's original atomic hypothesis presumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed that water's formula was HO, leading to the conclusion that the atomic mass of oxygen was 8 times that of hydrogen, instead of the modern value of about 16. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the diatomic elemental molecules in those gases. The first commercial method of producing oxygen was chemical, the so-called Brin process involving a reversible reaction of barium oxide. It was invented in 1852 and commercialized in 1884, but was displaced by newer methods in early 20th century. By the late 19th century scientists realized that air could be liquefied and its components isolated by compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool oxygen gas enough to liquefy it. He sent a telegram on December 22, 1877, to the French Academy of Sciences in Paris announcing his discovery of liquid oxygen. Just two days later, French physicist Louis Paul Cailletet announced his own method of liquefying molecular oxygen. Only a few drops of the liquid were produced in each case and no meaningful analysis could be conducted. Oxygen was liquefied in a stable state for the first time on March 29, 1883, by Polish scientists from Jagiellonian University, Zygmunt Wróblewski and Karol Olszewski. In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen for study. The first commercially viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. 
Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them separately. Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed . This method of welding and cutting metal later became common. In 1923, the American scientist Robert H. Goddard became the first person to develop a rocket engine that burned liquid fuel; the engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-fueled rocket 56 m at 97 km/h on March 16, 1926, in Auburn, Massachusetts, US. In academic laboratories, oxygen can be prepared by heating together potassium chlorate mixed with a small proportion of manganese dioxide. Oxygen levels in the atmosphere are trending slightly downward globally, possibly because of fossil-fuel burning. Characteristics Properties and molecular structure At standard temperature and pressure, oxygen is a colorless, odorless, and tasteless gas with the molecular formula , referred to as dioxygen. As dioxygen, two oxygen atoms are chemically bound to each other. The bond can be variously described based on level of theory, but is reasonably and simply described as a covalent double bond that results from the filling of molecular orbitals formed from the atomic orbitals of the individual oxygen atoms, the filling of which results in a bond order of two. More specifically, the double bond is the result of sequential, low-to-high energy, or Aufbau, filling of orbitals, and the resulting cancellation of contributions from the 2s electrons, after sequential filling of the low σ and σ* orbitals; σ overlap of the two atomic 2p orbitals that lie along the O–O molecular axis and π overlap of two pairs of atomic 2p orbitals perpendicular to the O–O molecular axis, and then cancellation of contributions from the remaining two 2p electrons after their partial filling of the π* orbitals. This combination of cancellations and σ and π overlaps results in dioxygen's double-bond character and reactivity, and a triplet electronic ground state. An electron configuration with two unpaired electrons, as is found in dioxygen orbitals (see the filled π* orbitals in the diagram) that are of equal energy—i.e., degenerate—is a configuration termed a spin triplet state. Hence, the ground state of the molecule is referred to as triplet oxygen. The highest-energy, partially filled orbitals are antibonding, and so their filling weakens the bond order from three to two. Because of its unpaired electrons, triplet oxygen reacts only slowly with most organic molecules, which have paired electron spins; this prevents spontaneous combustion. In the triplet form, molecules are paramagnetic. That is, they impart magnetic character to oxygen when it is in the presence of a magnetic field, because of the spin magnetic moments of the unpaired electrons in the molecule, and the negative exchange energy between neighboring molecules. Liquid oxygen is so magnetic that, in laboratory demonstrations, a bridge of liquid oxygen may be supported against its own weight between the poles of a powerful magnet. Singlet oxygen is a name given to several higher-energy species of molecular in which all the electron spins are paired. It is much more reactive with common organic molecules than is normal (triplet) molecular oxygen. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy of sunlight. 
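The orbital filling described above can be condensed into the usual molecular-orbital shorthand for dioxygen; this is the standard textbook configuration and bond-order count, shown here for clarity rather than quoted from the source:

$$\mathrm{O_2}:\ (\sigma_{2s})^{2}\,(\sigma^{*}_{2s})^{2}\,(\sigma_{2p})^{2}\,(\pi_{2p})^{4}\,(\pi^{*}_{2p})^{2}$$

$$\text{bond order} = \tfrac{1}{2}\,(8_{\text{bonding}} - 4_{\text{antibonding}}) = 2$$

The two π* electrons occupy separate, degenerate orbitals with parallel spins, which is why the ground state of the molecule is a spin triplet.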
It is also produced in the troposphere by the photolysis of ozone by light of short wavelength and by the immune system as a source of active oxygen. Carotenoids in photosynthetic organisms (and possibly animals) play a major role in absorbing energy from singlet oxygen and converting it to the unexcited ground state before it can cause harm to tissues. Allotropes The common allotrope of elemental oxygen on Earth is called dioxygen, , the major part of the Earth's atmospheric oxygen (see Occurrence). O2 has a bond length of 121 pm and a bond energy of 498 kJ/mol. O2 is used by complex forms of life, such as animals, in cellular respiration. Other aspects of are covered in the remainder of this article. Trioxygen () is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to lung tissue. Ozone is produced in the upper atmosphere when combines with atomic oxygen made by the splitting of by ultraviolet (UV) radiation. Since ozone absorbs strongly in the UV region of the spectrum, the ozone layer of the upper atmosphere functions as a protective radiation shield for the planet. Near the Earth's surface, it is a pollutant formed as a by-product of automobile exhaust. At low earth orbit altitudes, sufficient atomic oxygen is present to cause corrosion of spacecraft. The metastable molecule tetraoxygen () was discovered in 2001, and was assumed to exist in one of the six phases of solid oxygen. It was proven in 2006 that this phase, created by pressurizing to 20 GPa, is in fact a rhombohedral cluster. This cluster has the potential to be a much more powerful oxidizer than either or and may therefore be used in rocket fuel. A metallic phase was discovered in 1990 when solid oxygen is subjected to a pressure of above 96 GPa and it was shown in 1998 that at very low temperatures, this phase becomes superconducting. Physical properties Oxygen dissolves more readily in water than nitrogen, and in freshwater more readily than in seawater. Water in equilibrium with air contains approximately 1 molecule of dissolved for every 2 molecules of (1:2), compared with an atmospheric ratio of approximately 1:4. The solubility of oxygen in water is temperature-dependent, and about twice as much () dissolves at 0 °C than at 20 °C (). At 25 °C and of air, freshwater can dissolve about 6.04 milliliters (mL) of oxygen per liter, and seawater contains about 4.95 mL per liter. At 5 °C the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for freshwater and 7.2 mL (45% more) per liter for sea water. Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F) and freezes at 54.36 K (−218.79 °C, −361.82 °F). Both liquid and solid are clear substances with a light sky-blue color caused by absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of blue light). High-purity liquid is usually obtained by the fractional distillation of liquefied air. Liquid oxygen may also be condensed from air using liquid nitrogen as a coolant. Liquid oxygen is a highly reactive substance and must be segregated from combustible materials. The spectroscopy of molecular oxygen is associated with the atmospheric processes of aurora and airglow. The absorption in the Herzberg continuum and Schumann–Runge bands in the ultraviolet produces atomic oxygen that is important in the chemistry of the middle atmosphere. Excited-state singlet molecular oxygen is responsible for red chemiluminescence in solution. 
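As a quick check of the condensation and freezing points quoted above, the following minimal Python sketch converts the kelvin values to the Celsius and Fahrenheit figures given in parentheses.

# Convert the condensation and freezing points of oxygen quoted above.
def kelvin_to_celsius(t_k):
    return t_k - 273.15

def celsius_to_fahrenheit(t_c):
    return t_c * 9 / 5 + 32

for name, t_k in [("condenses", 90.20), ("freezes", 54.36)]:
    t_c = kelvin_to_celsius(t_k)
    print(f"Oxygen {name} at {t_k} K = {t_c:.2f} degC = {celsius_to_fahrenheit(t_c):.2f} degF")
# 90.20 K -> -182.95 degC -> -297.31 degF
# 54.36 K -> -218.79 degC -> -361.82 degF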
Table of thermal and physical properties of oxygen (O2) at atmospheric pressure: Isotopes and stellar origin Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance). Most 16O is synthesized at the end of the helium fusion process in massive stars but some is made in the neon burning process. 17O is primarily made by the burning of hydrogen into helium during the CNO cycle, making it a common isotope in the hydrogen burning zones of stars. Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich zones of evolved, massive stars. Fifteen radioisotopes have been characterized, ranging from 11O to 28O. The most stable are 15O with a half-life of 122.24 seconds and 14O with a half-life of 70.606 seconds. All of the remaining radioactive isotopes have half-lives that are less than 27 seconds and the majority of these have half-lives that are less than 83 milliseconds. The most common decay mode of the isotopes lighter than 16O is β+ decay to yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield fluorine. Occurrence Oxygen is the most abundant chemical element by mass in the Earth's biosphere, air, sea and land. Oxygen is the third most abundant chemical element in the universe, after hydrogen and helium. About 0.9% of the Sun's mass is oxygen. Oxygen constitutes 49.2% of the Earth's crust by mass as part of oxide compounds such as silicon dioxide and is the most abundant element by mass in the Earth's crust. It is also the major component of the world's oceans (88.8% by mass). Oxygen gas is the second most common component of the Earth's atmosphere, taking up 20.8% of its volume and 23.1% of its mass (some 10^15 tonnes). Earth is unusual among the planets of the Solar System in having such a high concentration of oxygen gas in its atmosphere: Mars (with 0.1% by volume) and Venus have much less. The O2 surrounding those planets is produced solely by the action of ultraviolet radiation on oxygen-containing molecules such as carbon dioxide. The unusually high concentration of oxygen gas on Earth is the result of the oxygen cycle. This biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for modern Earth's atmosphere. Photosynthesis releases oxygen into the atmosphere, while respiration, decay, and combustion remove it from the atmosphere. In the present equilibrium, production and consumption occur at the same rate. Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a much higher density of life due to their higher oxygen content. Water polluted with plant nutrients such as nitrates or phosphates may stimulate growth of algae by a process called eutrophication and the decay of these organisms and other biomaterials may reduce the O2 content in eutrophic water bodies. Scientists assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of O2 needed to restore it to a normal concentration. 
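The order of magnitude of the atmospheric O2 inventory quoted above can be sanity-checked from the mass fraction. In the Python sketch below, the total mass of the atmosphere (about 5.15 x 10^18 kg) is an assumed outside figure, not taken from this article.

# Rough check of the atmospheric O2 mass quoted above (order of 10^15 tonnes).
total_atmosphere_kg = 5.15e18     # assumed total mass of the atmosphere
o2_mass_fraction = 0.231          # 23.1% by mass, as stated above

o2_mass_tonnes = total_atmosphere_kg * o2_mass_fraction / 1000
print(f"{o2_mass_tonnes:.2e} tonnes of O2")   # ~1.2e15 tonnes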
Analysis Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine organisms to determine the climate millions of years ago (see oxygen isotope ratio cycle). Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than water molecules containing the 12% heavier oxygen-18, and this disparity increases at lower temperatures. During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate. Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples as old as hundreds of thousands of years. Planetary geologists have measured the relative quantities of oxygen isotopes in samples from the Earth, the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in the Sun, believed to be the same as those of the primordial solar nebula. Analysis of a silicon wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the coalescence of dust grains that formed the Earth. Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm. Some remote sensing scientists have proposed using the measurement of the radiance coming from vegetation canopies in those bands to characterize plant health status from a satellite platform. This approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible method of monitoring the carbon cycle from satellites on a global scale. Biological production and role of O2 Photosynthesis and respiration In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis. According to some estimates, green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced on Earth, and the rest is produced by terrestrial plants. Other estimates of the oceanic contribution to atmospheric oxygen are higher, while some estimates are lower, suggesting oceans produce ~45% of Earth's atmospheric oxygen each year. A simplified overall formula for photosynthesis is 6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2 or simply carbon dioxide + water + sunlight → glucose + dioxygen Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires the energy of four photons. Many steps are involved, but the result is the formation of a proton gradient across the thylakoid membrane, which is used to synthesize adenosine triphosphate (ATP) via photophosphorylation. The remaining O2 (after production of the water molecule) is released into the atmosphere. Oxygen is used in mitochondria in the generation of ATP during oxidative phosphorylation. 
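The simplified photosynthesis equation above can be verified to be atom-balanced with a few lines of Python; this is only a bookkeeping check, not a model of the reaction.

# Atom-balance check for the simplified photosynthesis equation above:
# 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
from collections import Counter

def atoms(formula, coefficient):
    return Counter({element: count * coefficient for element, count in formula.items()})

co2, h2o = {"C": 1, "O": 2}, {"H": 2, "O": 1}
glucose, o2 = {"C": 6, "H": 12, "O": 6}, {"O": 2}

left = atoms(co2, 6) + atoms(h2o, 6)
right = atoms(glucose, 1) + atoms(o2, 6)
print(left == right)  # True: 6 C, 12 H and 18 O on each side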
The reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + 2880 kJ/mol In vertebrates, O2 diffuses through membranes in the lungs and into red blood cells. Hemoglobin binds O2, changing color from bluish red to bright red (CO2 is released from another part of hemoglobin through the Bohr effect). Other animals use hemocyanin (molluscs and some arthropods) or hemerythrin (spiders and lobsters). A liter of blood can dissolve 200 cm3 of O2. Until the discovery of anaerobic metazoa, oxygen was thought to be a requirement for all complex life. Reactive oxygen species, such as superoxide ion (O2−) and hydrogen peroxide (H2O2), are reactive by-products of oxygen use in organisms. Parts of the immune system of higher organisms create peroxide, superoxide, and singlet oxygen to destroy invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of plants against pathogen attack. Oxygen is damaging to obligately anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in the atmosphere about 2.5 billion years ago during the Great Oxygenation Event, about a billion years after the first appearance of these organisms. An adult human at rest inhales 1.8 to 2.4 grams of oxygen per minute. This amounts to more than 6 billion tonnes of oxygen inhaled by humanity per year. Living organisms The free oxygen partial pressure in the body of a living vertebrate organism is highest in the respiratory system, and decreases along any arterial system, peripheral tissues, and venous system, respectively. Partial pressure is the pressure that oxygen would have if it alone occupied the volume. Build-up in the atmosphere Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria evolved, probably about 3.5 billion years ago. Free oxygen first appeared in significant quantities during the Paleoproterozoic era (between 3.0 and 2.3 billion years ago). Even if there was much dissolved iron in the oceans when oxygenic photosynthesis was getting more common, it appears the banded iron formations were created by anoxygenic or micro-aerophilic iron-oxidizing bacteria which dominated the deeper areas of the photic zone, while oxygen-producing cyanobacteria covered the shallows. Free oxygen began to outgas from the oceans 3–2.7 billion years ago, reaching 10% of its present level around 1.7 billion years ago. The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have driven most of the extant anaerobic organisms to extinction during the Great Oxygenation Event (oxygen catastrophe) about 2.4 billion years ago. Cellular respiration using O2 enables aerobic organisms to produce much more ATP than anaerobic organisms. Cellular respiration of O2 occurs in all eukaryotes, including all complex multicellular organisms such as plants and animals. Since the beginning of the Cambrian period 540 million years ago, atmospheric O2 levels have fluctuated between 15% and 30% by volume. Towards the end of the Carboniferous period (about 300 million years ago) atmospheric O2 levels reached a maximum of 35% by volume, which may have contributed to the large size of insects and amphibians at this time. Variations in atmospheric oxygen concentration have shaped past climates. When oxygen declined, atmospheric density dropped, which in turn increased surface evaporation, causing precipitation increases and warmer temperatures. 
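The "more than 6 billion tonnes per year" figure quoted above follows from the per-person uptake by simple arithmetic. In the Python sketch below, the world population of roughly 8 billion is an assumed outside figure.

# Rough check of humanity's annual oxygen intake, from the per-person rate quoted above.
grams_per_minute = 1.8                  # lower bound of the resting rate given in the text
minutes_per_year = 365 * 24 * 60
population = 8e9                        # assumed world population

tonnes_per_year = grams_per_minute * minutes_per_year * population / 1e6
print(f"{tonnes_per_year:.1e} tonnes of O2 per year")  # ~7.6e9, i.e. more than 6 billion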
At the current rate of photosynthesis it would take about 2,000 years to regenerate the entire in the present atmosphere. It is estimated that oxygen on Earth will last for about one billion years. Extraterrestrial free oxygen In the field of astrobiology and in the search for extraterrestrial life oxygen is a strong biosignature. That said it might not be a definite biosignature, being possibly produced abiotically on celestial bodies with processes and conditions (such as a peculiar hydrosphere) which allow free oxygen, like with Europa's and Ganymede's thin oxygen atmospheres. Industrial production One hundred million tonnes of are extracted from air for industrial uses annually by two primary methods. The most common method is fractional distillation of liquefied air, with distilling as a vapor while is left as a liquid. The other primary method of producing is passing a stream of clean, dry air through one bed of a pair of identical zeolite molecular sieves, which absorbs the nitrogen and delivers a gas stream that is 90% to 93% . Simultaneously, nitrogen gas is released from the other nitrogen-saturated zeolite bed, by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is interchanged, thereby allowing for a continuous supply of gaseous oxygen to be pumped through a pipeline. This is known as pressure swing adsorption. Oxygen gas is increasingly obtained by these non-cryogenic technologies (see also the related vacuum swing adsorption). Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. DC electricity must be used: if AC is used, the gases in each limb consist of hydrogen and oxygen in the explosive ratio 2:1. A similar method is the electrocatalytic evolution from oxides and oxoacids. Chemical catalysts can be used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-support equipment on submarines, and are still part of standard equipment on commercial airliners in case of depressurization emergencies. Another air separation method is forcing air to dissolve through ceramic membranes based on zirconium dioxide by either high pressure or an electric current, to produce nearly pure gas. Storage Oxygen storage methods include high-pressure oxygen tanks, cryogenics and chemical compounds. For reasons of economy, oxygen is often transported in bulk as a liquid in specially insulated tankers, since one liter of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure and . Such tankers are used to refill bulk liquid-oxygen storage containers, which stand outside hospitals and other institutions that need large volumes of pure oxygen gas. Liquid oxygen is passed through heat exchangers, which convert the cryogenic liquid into gas before it enters the building. Oxygen is also stored and shipped in smaller cylinders containing the compressed gas; a form that is useful in certain portable medical applications and oxy-fuel welding and cutting. Applications Medical Uptake of from the air is the essential purpose of respiration, so oxygen supplementation is used in medicine. Treatment not only increases oxygen levels in the patient's blood, but has the secondary effect of decreasing resistance to blood flow in many types of diseased lungs, easing work load on the heart. 
Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders (congestive heart failure), some disorders that cause increased pulmonary artery pressure, and any disease that impairs the body's ability to take up and use gaseous oxygen. Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been replaced mostly by the use of oxygen masks or nasal cannulas. Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the 'bends') are sometimes addressed with this therapy. Increased concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in the blood. Increasing the pressure of as soon as possible helps to redissolve the bubbles back into the blood so that these excess gasses can be exhaled naturally through the lungs. Normobaric oxygen administration at the highest available concentration is frequently used as first aid for any diving injury that may involve inert gas bubble formation in the tissues. There is epidemiological support for its use from a statistical study of cases recorded in a long term database. Life support and recreational use An application of as a low-pressure breathing gas is in modern space suits, which surround their occupant's body with the breathing gas. These devices use nearly pure oxygen at about one-third normal pressure, resulting in a normal blood partial pressure of . This trade-off of higher oxygen concentration for lower pressure is needed to maintain suit flexibility. Scuba and surface-supplied underwater divers and submarines also rely on artificially delivered . Submarines, submersibles and atmospheric diving suits usually operate at normal atmospheric pressure. Breathing air is scrubbed of carbon dioxide by chemical extraction and oxygen is replaced to maintain a constant partial pressure. Ambient pressure divers breathe air or gas mixtures with an oxygen fraction suited to the operating depth. Pure or nearly pure use in diving at pressures higher than atmospheric is usually limited to rebreathers, or decompression at relatively shallow depths (~6 meters depth, or less), or medical treatment in recompression chambers at pressures up to 2.8 bar, where acute oxygen toxicity can be managed without the risk of drowning. Deeper diving requires significant dilution of with other gases, such as nitrogen or helium, to prevent oxygen toxicity. People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental supplies. Pressurized commercial airplanes have an emergency supply of automatically supplied to the passengers in case of cabin depressurization. Sudden cabin pressure loss activates chemical oxygen generators above each seat, causing oxygen masks to drop. Pulling on the masks "to start the flow of oxygen" as cabin safety instructions dictate, forces iron filings into the sodium chlorate inside the canister. A steady stream of oxygen gas is then produced by the exothermic reaction. 
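A rough idea of how much oxygen such a chlorate generator can deliver follows from simple stoichiometry. The Python sketch below assumes the idealized decomposition 2 NaClO3 -> 2 NaCl + 3 O2, which is not stated in the text; real canisters also contain binders and the iron that ignites the reaction.

# Sketch: oxygen yield of a chlorate oxygen generator,
# assuming the idealized decomposition 2 NaClO3 -> 2 NaCl + 3 O2.
MOLAR_MASS_NACLO3 = 22.99 + 35.45 + 3 * 16.00   # ~106.44 g/mol
MOLAR_MASS_O2 = 32.00

def o2_from_chlorate(grams_naclo3):
    moles_naclo3 = grams_naclo3 / MOLAR_MASS_NACLO3
    return moles_naclo3 * 1.5 * MOLAR_MASS_O2   # 3 mol O2 per 2 mol NaClO3

print(f"{o2_from_chlorate(1000):.0f} g of O2 per kilogram of NaClO3")  # ~451 g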
Oxygen, as a mild euphoric, has a history of recreational use in oxygen bars and in sports. Oxygen bars are establishments found in the United States since the late 1990s that offer higher than normal exposure for a minimal fee. Professional athletes, especially in American football, sometimes go off-field between plays to don oxygen masks to boost performance. The pharmacological effect is doubted; a placebo effect is a more likely explanation. Available studies support a performance boost from oxygen enriched mixtures only if it is inhaled during aerobic exercise. Other recreational uses that do not involve breathing include pyrotechnic applications, such as George Goble's five-second ignition of barbecue grills. Industrial Smelting of iron ore into steel consumes 55% of commercially produced oxygen. In this process, is injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess carbon as the respective oxides, and . The reactions are exothermic, so the temperature increases to 1,700 °C. Another 25% of commercially produced oxygen is used by the chemical industry. Ethylene is reacted with to create ethylene oxide, which, in turn, is converted into ethylene glycol; the primary feeder material used to manufacture a host of products, including antifreeze and polyester polymers (the precursors of many plastics and fabrics). Most of the remaining 20% of commercially produced oxygen is used in medical applications, metal cutting and welding, as an oxidizer in rocket fuel, and in water treatment. Oxygen is used in oxyacetylene welding, burning acetylene with to produce a very hot flame. In this process, metal up to thick is first heated with a small oxy-acetylene flame and then quickly cut by a large stream of . Compounds The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is found in a few compounds such as peroxides. Compounds containing oxygen in other oxidation states are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2 (dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride). Oxides and other inorganic compounds Water () is an oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are covalently bonded to oxygen in a water molecule but also have an additional attraction (about 23.3 kJ/mol per hydrogen atom) to an adjacent oxygen atom in a separate molecule. These hydrogen bonds between water molecules hold them approximately 15% closer than what would be expected in a simple liquid with just van der Waals forces. Due to its electronegativity, oxygen forms chemical bonds with almost all other elements to give corresponding oxides. The surface of most metals, such as aluminium and titanium, are oxidized in the presence of air and become coated with a thin film of oxide that passivates the metal and slows further corrosion. Many oxides of the transition metals are non-stoichiometric compounds, with slightly less metal than the chemical formula would show. For example, the mineral FeO (wüstite) is written as , where x is usually around 0.05. Oxygen is present in the atmosphere in trace quantities in the form of carbon dioxide (). The Earth's crustal rock is composed in large part of oxides of silicon (silica , as found in granite and quartz), aluminium (aluminium oxide , in bauxite and corundum), iron (iron(III) oxide , in hematite and rust), and calcium carbonate (in limestone). 
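The oxidation states listed at the start of the Compounds section above follow from simple charge balance once the usual states of the other elements are fixed. The following minimal Python sketch reproduces several of them; it is illustrative bookkeeping only.

# Derive the oxygen oxidation state from charge balance,
# given the usual oxidation states of the other atoms in the species.
def oxygen_oxidation_state(other_atoms, n_oxygen, net_charge=0):
    """other_atoms: list of (oxidation_state, count) pairs for the non-oxygen atoms."""
    other_total = sum(state * count for state, count in other_atoms)
    return (net_charge - other_total) / n_oxygen

print(oxygen_oxidation_state([(+1, 2)], 1))           # H2O       -> -2.0
print(oxygen_oxidation_state([(+1, 2)], 2))           # H2O2      -> -1.0
print(oxygen_oxidation_state([], 2, net_charge=-1))   # superoxide -> -0.5
print(oxygen_oxidation_state([(-1, 2)], 2))           # O2F2      -> +1.0
print(oxygen_oxidation_state([(-1, 2)], 1))           # OF2       -> +2.0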
The rest of the Earth's crust is also made of oxygen compounds, in particular various complex silicates (in silicate minerals). The Earth's mantle, of much larger mass than the crust, is largely composed of silicates of magnesium and iron. Water-soluble silicates in the form of , , and are used as detergents and adhesives. Oxygen also acts as a ligand for transition metals, forming transition metal dioxygen complexes, which feature metal–. This class of compounds includes the heme proteins hemoglobin and myoglobin. An exotic and unusual reaction occurs with , which oxidizes oxygen to give O2+PtF6−, dioxygenyl hexafluoroplatinate. Organic compounds Among the most important classes of organic compounds that contain oxygen are (where "R" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone () and phenol () are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms. The element is similarly found in almost all biomolecules that are important to (or generated by) life. Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process called autoxidation. Most of the organic compounds that contain oxygen are not made by direct action of . Organic compounds important in industry and commerce that are made by direct oxidation of a precursor include ethylene oxide and peracetic acid. Safety and precautions The NFPA 704 standard rates compressed oxygen gas as nonhazardous to health, nonflammable and nonreactive, but an oxidizer. Refrigerated liquid oxygen (LOX) is given a health hazard rating of 3 (for increased risk of hyperoxia from condensed vapors, and for hazards common to cryogenic liquids such as frostbite), and all other ratings are the same as the compressed gas form. Toxicity Oxygen gas () can be toxic at elevated partial pressures, leading to convulsions and other health problems. Oxygen toxicity usually begins to occur at partial pressures more than 50 kilopascals (kPa), equal to about 50% oxygen composition at standard pressure or 2.5 times the normal sea-level partial pressure of about 21 kPa. This is not a problem except for patients on mechanical ventilators, since gas supplied through oxygen masks in medical applications is typically composed of only 30–50% by volume (about 30 kPa at standard pressure). At one time, premature babies were placed in incubators containing -rich air, but this practice was discontinued after some babies were blinded by the oxygen content being too high. Breathing pure in space applications, such as in some modern space suits, or in early spacecraft such as Apollo, causes no damage due to the low total pressures used. In the case of spacesuits, the partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting partial pressure in the astronaut's arterial blood is only marginally more than normal sea-level partial pressure. 
Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface-supplied diving. Prolonged breathing of an air mixture with an partial pressure more than 60 kPa can eventually lead to permanent pulmonary fibrosis. Exposure to an partial pressure greater than 160 kPa (about 1.6 atm) may lead to convulsions (normally fatal for divers). Acute oxygen toxicity (causing seizures, its most feared effect for divers) can occur by breathing an air mixture with 21% at or more of depth; the same thing can occur by breathing 100% at only . Combustion and other hazards Highly concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity; an ignition event, such as heat or a spark, is needed to trigger combustion. Oxygen is the oxidant, not the fuel. Concentrated will allow combustion to proceed rapidly and energetically. Steel pipes and storage vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; and therefore the design and manufacture of systems requires special training to ensure that ignition sources are minimized. The fire that killed the Apollo 1 crew in a launch pad test spread so rapidly because the capsule was pressurized with pure but at slightly more than atmospheric pressure, instead of the normal pressure that would be used in a mission. Liquid oxygen spills, if allowed to soak into organic matter, such as wood, petrochemicals, and asphalt can cause these materials to detonate unpredictably on subsequent mechanical impact.
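The toxicity thresholds discussed above are partial pressures, so they translate directly into depths for a diver breathing air. The Python sketch below assumes the common approximation of about one extra atmosphere per 10 m of seawater, which is not taken from the text.

# Sketch: O2 partial pressure for a diver breathing air,
# assuming ~1 atm of added pressure per 10 m of seawater.
SEA_LEVEL_KPA = 101.325
O2_FRACTION_AIR = 0.21

def po2_air_kpa(depth_m):
    total_pressure = SEA_LEVEL_KPA * (1 + depth_m / 10)
    return O2_FRACTION_AIR * total_pressure

for depth in (0, 30, 66):
    print(f"{depth} m: pO2 = {po2_air_kpa(depth):.0f} kPa")
# At roughly 66 m the pO2 of air passes the ~160 kPa convulsion threshold quoted above.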
Physical sciences
Chemistry
null
22304
https://en.wikipedia.org/wiki/Osmium
Osmium
Osmium () is a chemical element; it has symbol Os and atomic number 76. It is a hard, brittle, bluish-white transition metal in the platinum group that is found as a trace element in alloys, mostly in platinum ores. Osmium is the densest naturally occurring element. When experimentally measured using X-ray crystallography, it has a density of . Manufacturers use its alloys with platinum, iridium, and other platinum-group metals to make fountain pen nib tipping, electrical contacts, and in other applications that require extreme durability and hardness. Osmium is among the rarest elements in the Earth's crust, making up only 50 parts per trillion (ppt). Characteristics Physical properties Osmium is a hard, brittle, blue-gray metal, and the densest stable element—about twice as dense as lead. The density of osmium is slightly greater than that of iridium; the two are so similar (22.587 versus at 20 °C) that each was at one time considered to be the densest element. Only in the 1990s were measurements made accurately enough (by means of X-ray crystallography) to be certain that osmium is the denser of the two. Osmium has a blue-gray tint. The reflectivity of single crystals of osmium is complex and strongly direction-dependent, with light in the red and near-infrared wavelengths being more strongly absorbed when polarized parallel to the c crystal axis than when polarized perpendicular to the c axis; the c-parallel polarization is also slightly more reflected in the mid-ultraviolet range. Reflectivity reaches a sharp minimum at around 1.5 eV (near-infrared) for the c-parallel polarization and at 2.0 eV (orange) for the c-perpendicular polarization, and peaks for both in the visible spectrum at around 3.0 eV (blue-violet). Osmium is a hard but brittle metal that remains lustrous even at high temperatures. It has a very low compressibility. Correspondingly, its bulk modulus is extremely high, reported between and , which rivals that of diamond (). The hardness of osmium is moderately high at . Because of its hardness, brittleness, low vapor pressure (the lowest of the platinum-group metals), and very high melting point (the fourth highest of all elements, after carbon, tungsten, and rhenium), solid osmium is difficult to machine, form, or work. Chemical properties Osmium forms compounds with oxidation states ranging from −4 to +8. The most common oxidation states are +2, +3, +4, and +8. The +8 oxidation state is notable for being the highest attained by any chemical element aside from iridium's +9 and is encountered only in xenon, ruthenium, hassium, iridium, and plutonium. The oxidation states −1 and −2 represented by the two reactive compounds and are used in the synthesis of osmium cluster compounds. The most common compound exhibiting the +8 oxidation state is osmium tetroxide (). This toxic compound is formed when powdered osmium is exposed to air. It is a very volatile, water-soluble, pale yellow, crystalline solid with a strong smell. Osmium powder has the characteristic smell of osmium tetroxide. Osmium tetroxide forms red osmates upon reaction with a base. With ammonia, it forms the nitrido-osmates . Osmium tetroxide boils at 130 °C and is a powerful oxidizing agent. By contrast, osmium dioxide () is black, non-volatile, and much less reactive and toxic. Only two osmium compounds have major applications: osmium tetroxide for staining tissue in electron microscopy and for the oxidation of alkenes in organic synthesis, and the non-volatile osmates for organic oxidation reactions. 
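The reflectivity features quoted above are given as photon energies; converting them to wavelengths makes the colour assignments explicit. A minimal Python sketch using the standard relation lambda(nm) ≈ 1239.84 / E(eV):

# Convert the photon energies quoted for osmium's reflectivity features to wavelengths.
def ev_to_nm(energy_ev):
    return 1239.84 / energy_ev

for energy in (1.5, 2.0, 3.0):
    print(f"{energy} eV -> {ev_to_nm(energy):.0f} nm")
# 1.5 eV -> ~827 nm (near-infrared), 2.0 eV -> ~620 nm (orange), 3.0 eV -> ~413 nm (blue-violet)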
Osmium pentafluoride () is known, but osmium trifluoride () has not yet been synthesized. The lower oxidation states are stabilized by the larger halogens, so that the trichloride, tribromide, triiodide, and even diiodide are known. The oxidation state +1 is known only for osmium monoiodide (OsI), whereas several carbonyl complexes of osmium, such as triosmium dodecacarbonyl (), represent oxidation state 0. In general, the lower oxidation states of osmium are stabilized by ligands that are good σ-donors (such as amines) and π-acceptors (heterocycles containing nitrogen). The higher oxidation states are stabilized by strong σ- and π-donors, such as and . Despite its broad range of compounds in numerous oxidation states, osmium in bulk form at ordinary temperatures and pressures is stable in air. It resists attack by most acids and bases including aqua regia, but is attacked by and at high temperatures, and by hot concentrated nitric acid to produce . It can be dissolved by molten alkalis fused with an oxidizer such as sodium peroxide () or potassium chlorate () to give osmates such as . Isotopes Osmium has seven naturally occurring isotopes, five of which are stable: , , , , and (most abundant) . At least 37 artificial radioisotopes and 20 nuclear isomers exist, with mass numbers ranging from 160 to 203; the most stable of these is with a half-life of 6 years. undergoes alpha decay with such a long half-life years, approximately times the age of the universe, that for practical purposes it can be considered stable. is also known to undergo alpha decay with a half-life of years. Alpha decay is predicted for all the other naturally occurring isotopes, but this has never been observed, presumably due to very long half-lives. It is predicted that and can undergo double beta decay, but this radioactivity has not been observed yet. 189Os has a spin of 5/2 but 187Os has a nuclear spin 1/2. Its low natural abundance (1.64%) and low nuclear magnetic moment means that it is one of the most difficult natural abundance isotopes for NMR spectroscopy. is the descendant of (half-life ) and is used extensively in dating terrestrial as well as meteoric rocks (see Rhenium–osmium dating). It has also been used to measure the intensity of continental weathering over geologic time and to fix minimum ages for stabilization of the mantle roots of continental cratons. This decay is a reason why rhenium-rich minerals are abnormally rich in . However, the most notable application of osmium isotopes in geology has been in conjunction with the abundance of iridium, to characterise the layer of shocked quartz along the Cretaceous–Paleogene boundary that marks the extinction of the non-avian dinosaurs 65 million years ago. History Osmium was discovered in 1803 by Smithson Tennant and William Hyde Wollaston in London, England. The discovery of osmium is intertwined with that of platinum and the other metals of the platinum group. Platinum reached Europe as platina ("small silver"), first encountered in the late 17th century in silver mines around the Chocó Department, in Colombia. The discovery that this metal was not an alloy, but a distinct new element, was published in 1748. Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. 
Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed iridium in the black platinum residue in 1803, but did not obtain enough material for further experiments. Later the two French chemists Fourcroy and Vauquelin identified a metal in a platinum residue they called ptène. In 1803, Smithson Tennant analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed was of this new metal—which he named ptene, from the Greek word (ptènos) for winged. However, Tennant, who had the advantage of a much larger amount of residue, continued his research and identified two previously undiscovered elements in the black residue, iridium and osmium. He obtained a yellow solution (probably of cis–[Os(OH)2O4]2−) by reactions with sodium hydroxide at red heat. After acidification he was able to distill the formed OsO4. He named it osmium after Greek osme meaning "a smell", because of the chlorine-like and slightly garlic-like smell of the volatile osmium tetroxide. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. Uranium and osmium were early successful catalysts in the Haber process, the nitrogen fixation reaction of nitrogen and hydrogen to produce ammonia, giving enough yield to make the process economically successful. At the time, a group at BASF led by Carl Bosch bought most of the world's supply of osmium to use as a catalyst. Shortly thereafter, in 1908, cheaper catalysts based on iron and iron oxides were introduced by the same group for the first pilot plants, removing the need for the expensive and rare osmium. Osmium is now obtained primarily from the processing of platinum and nickel ores. Occurrence Osmium is one of the least abundant stable elements in Earth's crust, with an average mass fraction of 50 parts per trillion in the continental crust. Osmium is found in nature as an uncombined element or in natural alloys; especially the iridium–osmium alloys, osmiridium (iridium rich), and iridosmium (osmium rich). In nickel and copper deposits, the platinum-group metals occur as sulfides (i.e., ), tellurides (e.g., ), antimonides (e.g., ), and arsenides (e.g., ); in all these compounds platinum is exchanged by a small amount of iridium and osmium. As with all of the platinum-group metals, osmium can be found naturally in alloys with nickel or copper. Within Earth's crust, osmium, like iridium, is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld Igneous Complex in South Africa, though the large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin in Canada are also significant sources of osmium. Smaller reserves can be found in the United States. The alluvial deposits used by pre-Columbian people in the Chocó Department, Colombia, are still a source for platinum-group metals. The second large alluvial deposit was found in the Ural Mountains, Russia, which is still mined. Production Osmium is obtained commercially as a by-product from nickel and copper mining and processing. 
During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum-group metals, together with non-metallic elements such as selenium and tellurium, settle to the bottom of the cell as anode mud, which forms the starting material for their extraction. Separating the metals requires that they first be brought into solution. Several methods can achieve this, depending on the separation process and the composition of the mixture. Two representative methods are fusion with sodium peroxide followed by dissolution in aqua regia, and dissolution in a mixture of chlorine with hydrochloric acid. Osmium, ruthenium, rhodium, and iridium can be separated from platinum, gold, and base metals by their insolubility in aqua regia, leaving a solid residue. Rhodium can be separated from the residue by treatment with molten sodium bisulfate. The insoluble residue, containing ruthenium, osmium, and iridium, is treated with sodium oxide, in which Ir is insoluble, producing water-soluble ruthenium and osmium salts. After oxidation to the volatile oxides, is separated from by precipitation of (NH4)3RuCl6 with ammonium chloride. After it is dissolved, osmium is separated from the other platinum-group metals by distillation or extraction with organic solvents of the volatile osmium tetroxide. The first method is similar to the procedure used by Tennant and Wollaston. Both methods are suitable for industrial-scale production. In either case, the product is reduced using hydrogen, yielding the metal as a powder or sponge that can be treated using powder metallurgy techniques. Estimates of annual worldwide osmium production are on the order of several hundred to a few thousand kilograms. Production and consumption figures for osmium are not well reported because demand for the metal is limited and can be fulfilled with the byproducts of other refining processes. To reflect this, statistics often report osmium with other minor platinum group metals such as iridium and ruthenium. US imports of osmium from 2014 to 2021 averaged 155 kg annually. Applications Because osmium is virtually unforgeable when fully dense and very fragile when sintered, it is rarely used in its pure state, but is instead often alloyed with other metals for high-wear applications. Osmium alloys such as osmiridium are very hard and, along with other platinum-group metals, are used in the tips of fountain pens, instrument pivots, and electrical contacts, as they can resist wear from frequent operation. They were also used for the tips of phonograph styli during the late 78 rpm and early "LP" and "45" record era, circa 1945 to 1955. Osmium-alloy tips were significantly more durable than steel and chromium needle points, but wore out far more rapidly than competing, and costlier, sapphire and diamond tips, so they were discontinued. Osmium tetroxide has been used in fingerprint detection and in staining fatty tissue for optical and electron microscopy. As a strong oxidant, it cross-links lipids mainly by reacting with unsaturated carbon–carbon bonds and thereby both fixes biological membranes in place in tissue samples and simultaneously stains them. Because osmium atoms are extremely electron-dense, osmium staining greatly enhances image contrast in transmission electron microscopy (TEM) studies of biological materials. Those carbon materials otherwise have very weak TEM contrast. Another osmium compound, osmium ferricyanide (OsFeCN), exhibits similar fixing and staining action. 
The tetroxide and its derivative potassium osmate are important oxidants in organic synthesis. For the Sharpless asymmetric dihydroxylation, which uses osmate for the conversion of a double bond into a vicinal diol, Karl Barry Sharpless was awarded the Nobel Prize in Chemistry in 2001. OsO4 is very expensive for this use, so KMnO4 is often used instead, even though the yields are less for this cheaper chemical reagent. In 1898, the Austrian chemist Auer von Welsbach developed the Oslamp with a filament made of osmium, which he introduced commercially in 1902. After only a few years, osmium was replaced by tungsten, which is more abundant (and thus cheaper) and more stable. Tungsten has the highest melting point among all metals, and its use in light bulbs increases the luminous efficacy and life of incandescent lamps. The light bulb manufacturer Osram (founded in 1906, when three German companies, Auer-Gesellschaft, AEG and Siemens & Halske, combined their lamp production facilities) derived its name from the elements of osmium and Wolfram (the latter is German for tungsten). Like palladium, powdered osmium effectively absorbs hydrogen atoms. This could make osmium a potential candidate for a metal-hydride battery electrode. However, osmium is expensive and would react with potassium hydroxide, the most common battery electrolyte. Osmium has high reflectivity in the ultraviolet range of the electromagnetic spectrum; for example, at 600 Å osmium has a reflectivity twice that of gold. This high reflectivity is desirable in space-based UV spectrometers, which have reduced mirror sizes due to space limitations. Osmium-coated mirrors were flown in several space missions aboard the Space Shuttle, but it soon became clear that the oxygen radicals in low Earth orbit are abundant enough to significantly deteriorate the osmium layer. Precautions The primary hazard of metallic osmium is the potential formation of osmium tetroxide (OsO4), which is volatile and very poisonous. This reaction is thermodynamically favorable at room temperature, but the rate depends on temperature and the surface area of the metal. As a result, bulk material is not considered hazardous while powders react quickly enough that samples can sometimes smell like OsO4 if they are handled in air. Price Between 1990 and 2010, the nominal price of osmium metal was almost constant, while inflation reduced the real value from ~US$950/ounce to ~US$600/ounce. Because osmium has few commercial applications, it is not heavily traded and prices are seldom reported.
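The price behaviour described above is the usual effect of a flat nominal price being eroded by inflation. The Python sketch below is illustrative only; the cumulative US inflation factor of roughly 1.6 for 1990–2010 is an assumed outside figure, not taken from the text.

# Sketch: how a constant nominal price loses real value to inflation.
nominal_price_usd_per_oz = 600.0              # roughly constant over the period, per the text
cumulative_inflation_1990_to_2010 = 1.6       # assumed cumulative US CPI factor

real_value_in_1990 = nominal_price_usd_per_oz * cumulative_inflation_1990_to_2010
print(f"~${real_value_in_1990:.0f}/oz in 1990 vs ~${nominal_price_usd_per_oz:.0f}/oz in 2010 (constant dollars)")
# A flat nominal price of ~$600/oz corresponds to ~$960/oz falling to ~$600/oz in real terms.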
Physical sciences
Chemical elements_2
null
22305
https://en.wikipedia.org/wiki/Oxide
Oxide
An oxide () is a chemical compound containing at least one oxygen atom and one other element in its chemical formula. "Oxide" itself is the dianion (anion bearing a net charge of –2) of oxygen, an O2– ion with oxygen in the oxidation state of −2. Most of the Earth's crust consists of oxides. Even materials considered pure elements often develop an oxide coating. For example, aluminium foil develops a thin skin of Al2O3 (called a passivation layer) that protects the foil from further oxidation. Stoichiometry Oxides are extraordinarily diverse in terms of stoichiometries (the measurable relationships between the amounts of the elements they contain) and in terms of the structures of each stoichiometry. Most elements form oxides of more than one stoichiometry. Well-known examples are carbon monoxide and carbon dioxide. This applies to binary oxides, that is, compounds containing only oxide and another element. Far more common than binary oxides are oxides of more complex stoichiometries. Such complexity can arise by the introduction of other cations (a positively charged ion, i.e. one that would be attracted to the cathode in electrolysis) or other anions (a negatively charged ion). Iron silicate, Fe2SiO4, the mineral fayalite, is one of many examples of a ternary oxide. For many metal oxides, the possibilities of polymorphism and nonstoichiometry exist as well. The commercially important dioxide of titanium, for example, exists in three distinct structures. Many metal oxides exist in various nonstoichiometric states. Many molecular oxides exist with diverse ligands as well. For simplicity's sake, most of this article focuses on binary oxides. Formation Oxides are associated with all elements except a few noble gases. The pathways for the formation of this diverse family of compounds are correspondingly numerous. Metal oxides Many metal oxides arise by decomposition of other metal compounds, e.g. carbonates, hydroxides, and nitrates. In the making of calcium oxide, calcium carbonate (limestone) breaks down upon heating, releasing carbon dioxide: CaCO3 -> CaO + CO2 The reaction of elements with oxygen in air is a key step in corrosion relevant to the commercial use of iron especially. Almost all elements form oxides upon heating in an oxygen atmosphere. For example, zinc powder will burn in air to give zinc oxide: 2 Zn + O2 -> 2 ZnO The production of metals from ores often involves the production of oxides by roasting (heating) metal sulfide minerals in air. In this way, MoS2 (molybdenite) is converted to molybdenum trioxide, the precursor to virtually all molybdenum compounds: 2 MoS2 + 7 O2 -> 2 MoO3 + 4 SO2 Noble metals (such as gold and platinum) are prized because they resist direct chemical combination with oxygen. Roasting of nickel sulfide proceeds analogously: NiS + 3/2 O2 -> NiO + SO2 Non-metal oxides Important and prevalent nonmetal oxides are carbon dioxide and carbon monoxide. These species form upon full or partial oxidation of carbon or hydrocarbons. With a deficiency of oxygen, the monoxide is produced: CH4 + 3/2 O2 -> CO + 2 H2O C + 1/2 O2 -> CO With excess oxygen, the dioxide is the product; the pathway proceeds by the intermediacy of carbon monoxide: CH4 + 2 O2 -> CO2 + 2 H2O C + O2 -> CO2 Elemental nitrogen (N2) is difficult to convert to oxides, but the combustion of ammonia gives nitric oxide, which further reacts with oxygen: 4 NH3 + 5 O2 -> 4 NO + 6 H2O NO + 1/2 O2 -> NO2 These reactions are practiced in the production of nitric acid, a commodity chemical. 
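The decomposition of limestone shown above also fixes the mass yield of calcium oxide. A minimal Python sketch follows; the molar masses are standard values and are an assumption supplied here, not taken from the text.

# Sketch: mass yield of the lime-burning reaction quoted above, CaCO3 -> CaO + CO2.
MOLAR_MASS = {"CaCO3": 100.09, "CaO": 56.08, "CO2": 44.01}   # g/mol, standard values

def cao_from_limestone(grams_caco3):
    moles = grams_caco3 / MOLAR_MASS["CaCO3"]
    return moles * MOLAR_MASS["CaO"]     # 1 mol CaO per mol CaCO3

print(f"{cao_from_limestone(1000):.0f} g of CaO per kilogram of CaCO3")  # ~560 g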
The chemical produced on the largest scale industrially is sulfuric acid. It is produced by the oxidation of sulfur to sulfur dioxide, which is separately oxidized to sulfur trioxide: S + O2 -> SO2 SO2 + 1/2 O2 -> SO3 Finally the trioxide is converted to sulfuric acid by a hydration reaction: SO3 + H2O -> H2SO4 Structure Oxides have a range of structures, from individual molecules to polymeric and crystalline structures. At standard conditions, oxides may range from solids to gases. Solid oxides of metals usually have polymeric structures at ambient conditions. Molecular oxides Although most metal oxides are crystalline solids, many non-metal oxides are molecules. Examples of molecular oxides are carbon dioxide and carbon monoxide. All simple oxides of nitrogen are molecular, e.g., NO, N2O, NO2 and N2O4. Phosphorus pentoxide is a more complex molecular oxide with a deceptive name, the real formula being P4O10. Tetroxides are rare, with a few more common examples being ruthenium tetroxide, osmium tetroxide, and xenon tetroxide. Reactions Reduction Reduction of metal oxide to the metal is practiced on a large scale in the production of some metals. Many metal oxides convert to metals simply by heating, (see Thermal decomposition). For example, silver oxide decomposes at 200 °C: 2 Ag2O -> 4 Ag + O2 Most often, however, metals oxides are reduced by a chemical reagent. A common and cheap reducing agent is carbon in the form of coke. The most prominent example is that of iron ore smelting. Many reactions are involved, but the simplified equation is usually shown as: 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2 Some metal oxides dissolve in the presence of reducing agents, which can include organic compounds. Reductive dissolution of ferric oxides is integral to geochemical phenomena such as the iron cycle. Hydrolysis and dissolution Because the M-O bonds are typically strong, metal oxides tend to be insoluble in solvents, though they may be attacked by aqueous acids and bases. Dissolution of oxides often gives oxyanions. Adding aqueous base to gives various phosphates. Adding aqueous base to gives polyoxometalates. Oxycations are rarer, some examples being nitrosonium (), vanadyl (), and uranyl (). Of course many compounds are known with both oxides and other groups. In organic chemistry, these include ketones and many related carbonyl compounds. For the transition metals, many oxo complexes are known as well as oxyhalides. Nomenclature and formulas The chemical formulas of the oxides of the chemical elements in their highest oxidation state are predictable and are derived from the number of valence electrons for that element. Even the chemical formula of O4, tetraoxygen, is predictable as a group 16 element. One exception is copper, for which the highest oxidation state oxide is copper(II) oxide and not copper(I) oxide. Another exception is fluoride, which does not exist as one might expect—as F2O7—but as OF2.
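The predictability described in the Nomenclature section above can be expressed as a small rule: for a main-group element, the highest-oxidation-state oxide pairs the element's valence-electron count against oxygen's −2. The Python sketch below is illustrative only and ignores the exceptions (such as copper and fluorine) noted in the text.

# Sketch: predicting the highest-oxidation-state oxide of a main-group element
# from its number of valence electrons.
def oxide_formula(symbol, n_element, n_oxygen):
    def sub(k):
        return "" if k == 1 else str(k)
    return f"{symbol}{sub(n_element)}O{sub(n_oxygen)}"

def highest_oxide(symbol, valence_electrons):
    if valence_electrons % 2 == 0:
        return oxide_formula(symbol, 1, valence_electrons // 2)   # e.g. SO3, XeO4
    return oxide_formula(symbol, 2, valence_electrons)            # e.g. Na2O, Cl2O7

for symbol, n in [("Na", 1), ("Ca", 2), ("Al", 3), ("C", 4), ("P", 5), ("S", 6), ("Cl", 7), ("Xe", 8)]:
    print(symbol, "->", highest_oxide(symbol, n))
# Na2O, CaO, Al2O3, CO2, P2O5, SO3, Cl2O7, XeO4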
Physical sciences
Inorganic compounds
null
22332
https://en.wikipedia.org/wiki/October
October
October is the tenth month of the year in the Julian and Gregorian calendars. Its length is 31 days. The eighth month in the old calendar of Romulus , October retained its name (from Latin and Greek ôctō meaning "eight") after January and February were inserted into the calendar that had originally been created by the Romans. In Ancient Rome, one of three Mundus patet would take place on October 5, Meditrinalia October 11, Augustalia on October 12, October Horse on October 15, and Armilustrium on October 19. These dates do not correspond to the modern Gregorian calendar. Among the Anglo-Saxons, it was known as Winterfylleth (Ƿinterfylleþ), because at this full moon, winter was supposed to begin. October is commonly associated with the season of autumn in parts of the Northern Hemisphere, and spring in parts of the Southern Hemisphere, where it is the seasonal equivalent to April in the Northern Hemisphere and vice versa. Symbols October's birthstones are the tourmaline and opal. Its birth flower is the calendula. The zodiac signs are Libra (until October 22) and Scorpio (from October 23 onward). Observances This list does not necessarily imply either official status or general observance. Non-Gregorian: dates (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.) List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Month-long Black History Month in the United Kingdom In Catholic Church tradition, October is the Month of the Holy Rosary. Breast Cancer Awareness Month Health Literacy Month International Walk to School Month Medical Ultrasound Awareness Month Rett Syndrome Awareness Month World Blindness Awareness Month World Menopause Month Vegetarian Awareness Month United States The last two to three weeks in October (and, occasionally, the first week of November) are normally the only time of the year during which all of the "Big Four" major professional sports leagues in the U.S. and Canada schedule games; the National Basketball Association begins its preseason and about two weeks later starts the regular season, the National Hockey League is in the first month of its regular season, the National Football League is about halfway through its regular season, and Major League Baseball is in its postseason with the League Championship Series and World Series. Days on which all four leagues play are colloquially known as a sports equinox. 
American Archives Month National Adopt a Shelter Dog Month National Arts & Humanities Month National Bullying Prevention Month National Cyber Security Awareness Month National Domestic Violence Awareness Month Filipino American History Month Italian-American Heritage and Culture Month Polish American Heritage Month National Work and Family Month United States, health-related American Pharmacist Month Celebrating All of October Dwarfism/Little People/Short Stature/Skeletal Dysplasia Awareness Dwarfism/Little People Awareness Month Eczema Awareness Month National Dental Hygiene Month National Healthy Lung Month National Infertility Awareness Month Liver Awareness Month National Lupus Erythematosus Awareness Month National Physical Therapy Month National Spina Bifida Awareness Month Sudden Infant Death Syndrome Awareness Month (United States) United States, culinary National Pizza Month National Popcorn Poppin' Month National Pork Month National Seafood Month Movable dates Oktoberfest celebrations (varies globally based on area) Astronomy Day: October 1 World Cerebral Palsy Day: October 6 World College Radio Day: October 7 Earth Science Week: October 9–15
Technology
Months
null
22350
https://en.wikipedia.org/wiki/Osteichthyes
Osteichthyes
Osteichthyes ( ; ), also known as osteichthyans or commonly referred to as the bony fish, is a diverse superclass of vertebrate animals that have endoskeletons primarily composed of bone tissue. They can be contrasted with the Chondrichthyes (cartilaginous fish) and the extinct placoderms and acanthodians, which have endoskeletons primarily composed of cartilage. The vast majority of extant fish are members of Osteichthyes, being an extremely diverse and abundant group consisting of 45 orders, over 435 families and 28,000 species. It is the largest class of vertebrates in existence today, encompassing most aquatic vertebrates, as well as all semi-aquatic and terrestrial vertebrates. The group is divided into two main clades, the ray-finned fish (Actinopterygii, which makes up the vast majority of extant fish) and the lobe-finned fish (Sarcopterygii, which gave rise to all land vertebrates, i.e. tetrapods). The oldest known fossils of bony fish are about 425 million years old from the late Silurian, which are also transitional fossils showing a tooth pattern that is in between the tooth rows of sharks and true bony fishes. Despite the name, these early basal bony fish had not yet evolved ossification and their skeletons were still mostly cartilaginous, and the main distinguishing feature that set them apart from other fish clades were the development of foregut pouches that eventually evolved into the swim bladders and lungs, respectively. Osteichthyes can be compared to Euteleostomi. In paleontology the terms are synonymous. In ichthyology the difference is that Euteleostomi presents a cladistic view which includes the terrestrial tetrapods that evolved from lobe-finned fish. Until recently, the view of most ichthyologists has been that Osteichthyes were paraphyletic and include only fishes. However, since 2013 widely cited ichthyology papers have been published with phylogenetic trees that treat the Osteichthyes as a clade including tetrapods. Characteristics Bony fish are characterized by a relatively stable pattern of cranial bones, rooted, medial insertion of mandibular muscle in the lower jaw. The head and pectoral girdles are covered with large dermal bones. The eyeball is supported by a sclerotic ring of four small bones, but this characteristic has been lost or modified in many modern species. The labyrinth in the inner ear contains large otoliths. The braincase, or neurocranium, is frequently divided into anterior and posterior sections divided by a fissure. Early bony fish had simple respiratory diverticula (an outpouching on either side of the esophagus) which helped them breathe air in low-oxygen water as a form of supplementary enteral respiration. In ray-finned fish these have evolved into swim bladders, the changing sizes of which help to alter the body's specific density and buoyancy. In elpistostegalians, a crown group of lobe-finned fish that gave rise to the land-dwelling tetrapods, these respiratory diverticula became further specialized for obligated air breathing and evolved into the modern amphibian, reptilian, avian and mammalian lungs. Early bony fish did not have fin spines like most modern fish, but instead had the fleshy paddle-like fins similar to other non-bony clades of fish, although the lobe-finned fish evolved articulated appendicular skeletons within their paired fins, which gave rise to tetrapods' limbs. They also evolved a pair of opercula (gill covers), which can actively draw water across the gills so they can breathe without having to swim. 
Bony fish do not have placoid scales like cartilaginous fish; instead they have three types of scales, none of which penetrates the epidermis. The three categories of scales in Osteichthyes are cosmoid scales, ganoid scales and teleost scales; the teleost scales are further divided into two subgroups, the cycloid scales and the ctenoid scales. All of these scales originate from a base of bone, the only difference being that teleost scales have just one layer of bone. Ganoid scales have lamellar bone, vascular bone that lies on top of the lamellar bone, and enamel that lies on top of both layers of bone. Cosmoid scales have the same two layers of bone that ganoid scales have (vascular and lamellar, the two subcategories of bone found in scales), except that they have dentin in between the enamel and the vascular bone. All of these scales are found underneath the epidermis and do not break the epidermis of the fish, unlike placoid scales, which poke through the epidermis. Classification Traditionally, Osteichthyes was considered a class, recognised by the presence of a swim bladder, only three pairs of gill arches hidden behind a bony operculum, and a predominantly bony skeleton. Under this classification system, Osteichthyes was considered paraphyletic with regard to land vertebrates, as the common ancestor of all osteichthyans includes tetrapods amongst its descendants. While the largest subclass, Actinopterygii (ray-finned fish), is monophyletic, with the inclusion of the smaller sub-class Sarcopterygii, Osteichthyes was regarded as paraphyletic. This has led to the current cladistic classification which splits the Osteichthyes into two full classes. Under this scheme Osteichthyes is monophyletic, as it includes the tetrapods making it a synonym of the clade Euteleostomi. Most bony fish belong to the ray-finned fish (Actinopterygii). Phylogeny A phylogeny of living Osteichthyes, including the tetrapods, is shown in the cladogram below. Whole-genome duplication took place in the ancestral Osteichthyes. Biology All bony fish possess gills. For the majority this is their sole or main means of respiration. Lungfish and other osteichthyan species are capable of respiration through lungs or vascularized swim bladders. Other species can respire through their skin, intestines, and/or stomach. Osteichthyes are primitively ectothermic (cold blooded), meaning that their body temperature is dependent on that of the water. But some of the larger marine osteichthyids, such as the opah, swordfish and tuna have independently evolved various levels of endothermy. Bony fish can be any type of heterotroph: numerous species of omnivore, carnivore, herbivore, filter-feeder, detritivore, or hematophage are documented. Some bony fish are hermaphrodites, and a number of species exhibit parthenogenesis. Fertilization is usually external, but can be internal. Development is usually oviparous (egg-laying) but can be ovoviviparous, or viviparous. Although there is usually no parental care after birth, before birth parents may scatter, hide, guard or brood eggs, with sea horses being notable in that the males undergo a form of "pregnancy", brooding eggs deposited in a ventral pouch by a female. Examples The giant sunfish is the heaviest bony fish in the world; in late 2021, Portuguese fishermen found a dead sunfish near the coast of Faial Island, Azores, whose weight and dimensions established it as the biggest giant sunfish ever captured. 
The longest is the king of herrings, a type of oarfish. Other very large bony fish include the Atlantic blue marlin, some specimens of which have been recorded as in excess of , the black marlin, some sturgeon species, and the giant and goliath grouper, which both can exceed in weight. In contrast, Paedocypris progenetica and the stout infantfish can measure less than . The beluga sturgeon is the largest species of freshwater bony fish extant today, and Arapaima gigas is among the largest of the freshwater fish. The largest bony fish ever was Leedsichthys, which dwarfed the beluga sturgeon as well as the ocean sunfish, giant grouper and all the other giant bony fishes alive today. Comparison with cartilaginous fishes
Biology and health sciences
Fishes
null
22353
https://en.wikipedia.org/wiki/Orrorin
Orrorin
Orrorin is an extinct genus of primate within Homininae from the Miocene Lukeino Formation and Pliocene Mabaget Formation, both of Kenya. The type species is O. tugenensis, named in 2001, and a second species, O. praegens, was assigned to the genus in 2022. Discovery and naming Orrorin tugenensis The first part of the holotype, a lower molar, was discovered by Martin Pickford in 1974 and described by Pickford (1975). The team that found the rest of the holotype of O. tugenensis was led by Brigitte Senut and Martin Pickford from the French National Museum of Natural History. Starting from 17 October 2000, 20 fossils were found at four sites in the Lukeino Formation, Kenya: of these, the fossils at Cheboit and Aragai are the oldest (), while those in Kapsomin and Kapcheberek are found in the upper levels of the formation (). Orrorin tugenensis was named and described by Senut et al. (2001). Orrorin praegens The second species, O. praegens, was first described by Ward (1985) and Ward & Hill (1988), and was initially named Homo antiquus praegens by Ferguson (1989) based on specimen KNM-TH 13150, a mandible discovered in the Pliocene Mabaget Formation of Kenya during the early 1980s. The mandible is known as the Tabarin mandible, which was previously classified within Ardipithecus ramidus (or cf. A. cf. ramidus), "Ardipithecus" praegens or "Praeanthropus" praegens. Several referred remains of O. praegens were collected between 2005 and 2011 by the Franco-Kenyan Kenya Palaeontology Expedition and they, alongside the Tabarin mandible, were regarded by Pickford et al. (2022) as separate from Homo and were therefore classified within Orrorin as O. praegens. Etymology The name of the genus Orrorin (plural Orroriek) means "original man" in Tugen, and the epithet of O. tugenensis derives from the Tugen Hills in Kenya, where the first fossil was found in 2000. The epithet of O. praegens means roughly "group of people who came before." Fossils The 20 specimens belonging to O. tugenensis are believed to be from at least five individuals. They include: the posterior part of a mandible in two pieces; a symphysis and several isolated teeth; three fragments of femora; a partial humerus; a proximal phalanx; and a distal thumb phalanx. Orrorin had small teeth relative to its body size. Its dentition differs from that found in Australopithecus in that its cheek teeth are smaller and less elongated mesiodistally, and from Ardipithecus in that its enamel is thicker. The dentition differs from both these species in the presence of a mesial groove on the upper canines. The canines are ape-like but reduced, like those found in Miocene apes and female chimpanzees. Orrorin had small post-canines and was microdont, like modern humans, whereas australopithecines were megadont. However, some researchers have denied that this is compelling evidence that Orrorin was more closely related to modern humans than australopithecines, as early members of the genus Homo, who were almost certainly the direct ancestors of modern humans, were also megadont. In the femur, the head is spherical and rotated anteriorly; the neck is elongated and oval in section, and the lesser trochanter protrudes medially. While these suggest that Orrorin was bipedal, the rest of the postcranium indicates it climbed trees. While the proximal phalanx is curved, the distal pollical phalanx is of human proportions and has thus been associated with toolmaking, but in this context it should probably be associated with grasping abilities useful for tree-climbing. 
After the fossils were found in 2000, they were held at the Kipsaraman village community museum, but the museum was subsequently closed. Since then, according to the Community Museums of Kenya chairman Eustace Kitonga, the fossils have been stored in a secret bank vault in Nairobi. Classification If Orrorin proves to be a direct human ancestor, then according to some paleoanthropologists, australopithecines such as Australopithecus afarensis ("Lucy") may be considered a side branch of the hominid family tree: Orrorin is both earlier, by almost 3 million years, and more similar to modern humans than is A. afarensis. The main similarity is that the Orrorin femur is morphologically closer to that of Homo sapiens than is Lucy's; there is, however, some debate over this point. This debate is largely centered on the fact that Lucy was female and the Orrorin femur it has been compared to belonged to a male. Another point of view cites comparisons between Orrorin and other Miocene apes, rather than extant great apes, which instead show the femur to be intermediate between those of australopiths and those earlier apes. Other fossils (leaves and many mammals) found in the Lukeino Formation show that Orrorin lived in a dry evergreen forest environment, not the savanna assumed by many theories of human evolution. Evolution of bipedalism The fossils of Orrorin tugenensis share no derived features with hominoid great-ape relatives. In contrast, "Orrorin shares several apomorphic features with modern humans, as well as some with australopithecines, including the presence of an obturator externus groove, elongated femoral neck, anteriorly twisted head (posterior twist in Australopithecus), anteroposteriorly compressed femoral neck, asymmetric distribution of cortex in the femoral neck, shallow superior notch, and a well developed gluteal tuberosity which coalesces vertically with the crest that descends the femoral shaft posteriorly." It does, however, also share many such properties with several Miocene ape species, even showing some transitional elements between basal apes such as Aegyptopithecus and Australopithecus. According to recent studies Orrorin tugenensis is a basal hominid that adopted an early form of bipedalism. Based on the structure of its femoral head, it still exhibited some arboreal properties, likely for foraging and building shelters. The femoral neck in Orrorin tugenensis fossils is elongated and is similar in shape and length to those of modern humans and australopithecines. While it was originally claimed that its femoral head is larger than those of australopithecines and is much closer in shape and relative size to that of Homo sapiens, this claim has been challenged by some researchers who have noted that the femoral heads of male australopithecines are more akin to those of Orrorin, and by extension modern humans, than those of female australopithecines. Proponents of the notion that Orrorin is more closely related to humans than Lucy is have addressed this by asserting that the male australopithecine femurs in question in fact belong to a different species than Lucy. O. tugenensis appears to have developed bipedalism 6 million years ago. O. tugenensis shares an early hominin feature in which the iliac blade is flared to help counter the torque of the body's weight; this indicates that it had adopted bipedalism by around 6 MYA. These features are shared with many species of Australopithecus. 
It has been suggested by Pickford that the many features Orrorin shares with modern humans show that it is more closely related to Homo sapiens than to Australopithecus. This would mean that Australopithecus represents a side branch in hominin evolution that does not directly lead to Homo. However, the femoral morphology of O. tugenensis shares many similarities with australopithecine femoral morphology, which weakens this claim. Another study, conducted by Almecija, suggested that Orrorin is more closely related to early hominins than to Homo. An analysis of the BAR 1002'00 femur showed that Orrorin is an intermediate between Pan and Australopithecus afarensis. The current prevailing theory is that Orrorin tugenensis is a basal hominin, and that bipedalism developed early in the hominin clade and persisted along the human evolutionary lineage. While the phylogeny of Orrorin is uncertain, the evidence it provides for the evolution of bipedalism is an invaluable contribution from this early fossil hominin. A recent phylogenetic analysis also recovered Orrorin as a hominin.
Biology and health sciences
Australopithecines
Biology
22362
https://en.wikipedia.org/wiki/Ordered%20pair
Ordered pair
In mathematics, an ordered pair, denoted (a, b), is a pair of objects in which the order of the objects is significant. The ordered pair (a, b) is different from the ordered pair (b, a), unless a = b. In contrast, the unordered pair, denoted {a, b}, always equals the unordered pair {b, a}. Ordered pairs are also called 2-tuples, or sequences (sometimes, lists in a computer science context) of length 2. Ordered pairs of scalars are sometimes called 2-dimensional vectors. (Technically, this is an abuse of terminology since an ordered pair need not be an element of a vector space.) The entries of an ordered pair can be other ordered pairs, enabling the recursive definition of ordered n-tuples (ordered lists of n objects). For example, the ordered triple (a,b,c) can be defined as (a, (b,c)), i.e., as one pair nested in another. In the ordered pair (a, b), the object a is called the first entry, and the object b the second entry of the pair. Alternatively, the objects are called the first and second components, the first and second coordinates, or the left and right projections of the ordered pair. Cartesian products and binary relations (and hence functions) are defined in terms of ordered pairs. Generalities Let (a, b) and (c, d) be ordered pairs. Then the characteristic (or defining) property of the ordered pair is: (a, b) = (c, d) if and only if a = c and b = d. The set of all ordered pairs whose first entry is in some set A and whose second entry is in some set B is called the Cartesian product of A and B, and written A × B. A binary relation between sets A and B is a subset of A × B. The notation (a, b) may be used for other purposes, most notably as denoting open intervals on the real number line. In such situations, the context will usually make it clear which meaning is intended. For additional clarification, the ordered pair may be denoted by the variant notation ⟨a, b⟩, but this notation also has other uses. The left and right projections of a pair p are usually denoted by π1(p) and π2(p), or by ℓ(p) and r(p), respectively. In contexts where arbitrary n-tuples are considered, πi(t) is a common notation for the i-th component of an n-tuple t. Informal and formal definitions In some introductory mathematics textbooks an informal (or intuitive) definition of ordered pair is given, such as: For any two objects a and b, the ordered pair (a, b) is a notation specifying the two objects a and b, in that order. This is usually followed by a comparison to a set of two elements; pointing out that in a set a and b must be different, but in an ordered pair they may be equal, and that while the order of listing the elements of a set doesn't matter, in an ordered pair changing the order of distinct entries changes the ordered pair. This "definition" is unsatisfactory because it is only descriptive and is based on an intuitive understanding of order. However, as is sometimes pointed out, no harm will come from relying on this description and almost everyone thinks of ordered pairs in this manner. A more satisfactory approach is to observe that the characteristic property of ordered pairs given above is all that is required to understand the role of ordered pairs in mathematics. Hence the ordered pair can be taken as a primitive notion, whose associated axiom is the characteristic property. This was the approach taken by the N. Bourbaki group in its Theory of Sets, published in 1954. However, this approach also has its drawbacks as both the existence of ordered pairs and their characteristic property must be axiomatically assumed. 
Another way to rigorously deal with ordered pairs is to define them formally in the context of set theory. This can be done in several ways and has the advantage that existence and the characteristic property can be proven from the axioms that define the set theory. One of the most cited versions of this definition is due to Kuratowski (see below), and his definition was used in the second edition of Bourbaki's Theory of Sets, published in 1970. Even those mathematical textbooks that give an informal definition of ordered pairs will often mention the formal definition of Kuratowski in an exercise. Defining the ordered pair using set theory If one agrees that set theory is an appealing foundation of mathematics, then all mathematical objects must be defined as sets of some sort. Hence if the ordered pair is not taken as primitive, it must be defined as a set. Several set-theoretic definitions of the ordered pair are given below (see also Diepert). Wiener's definition Norbert Wiener proposed the first set theoretical definition of the ordered pair in 1914: (a, b) := {{{a}, ∅}, {{b}}}. He observed that this definition made it possible to define the types of Principia Mathematica as sets. Principia Mathematica had taken types, and hence relations of all arities, as primitive. Wiener used {{b}} instead of {b} to make the definition compatible with type theory, where all elements in a class must be of the same "type". With b nested within an additional set, its type is equal to that of {{a}, ∅}. Hausdorff's definition About the same time as Wiener (1914), Felix Hausdorff proposed his definition: (a, b) := {{a, 1}, {b, 2}}, "where 1 and 2 are two distinct objects different from a and b." Kuratowski's definition In 1921 Kazimierz Kuratowski offered the now-accepted definition of the ordered pair (a, b): (a, b)K := {{a}, {a, b}}. When the first and the second coordinates are identical, the definition obtains: (a, a)K = {{a}, {a, a}} = {{a}, {a}} = {{a}}. Given some ordered pair p, the property "x is the first coordinate of p" can be formulated as: ∀Y ∈ p : x ∈ Y. The property "x is the second coordinate of p" can be formulated as: (∃Y ∈ p : x ∈ Y) ∧ (∀Y1, Y2 ∈ p : Y1 ≠ Y2 → (x ∉ Y1 ∨ x ∉ Y2)). In the case that the left and right coordinates are identical, the right conjunct is trivially true, since Y1 ≠ Y2 is never the case. This is how we can extract the first coordinate of a pair (using the iterated-operation notation for arbitrary intersection and arbitrary union): π1(p) = ⋃⋂p. This is how the second coordinate can be extracted: π2(p) = ⋃{x ∈ ⋃p : ⋃p ≠ ⋂p → x ∉ ⋂p} (if ⋃p ≠ ⋂p, then the set could be obtained more simply as {x ∈ ⋃p : x ∉ ⋂p}, but the previous formula also takes into account the case when ⋃p = ⋂p). Note that π1 and π2 are generalized functions, in the sense that their domains and codomains are proper classes. Variants The above Kuratowski definition of the ordered pair is "adequate" in that it satisfies the characteristic property that an ordered pair must satisfy, namely that (a, b) = (c, d) if and only if a = c and b = d. In particular, it adequately expresses 'order', in that (a, b) = (b, a) is false unless b = a. There are other definitions, of similar or lesser complexity, that are equally adequate, for example (a, b)reverse := {{b}, {a, b}} and (a, b)short := {a, {a, b}}, both discussed below. The reverse definition is merely a trivial variant of the Kuratowski definition, and as such is of no independent interest. The definition short is so-called because it requires two rather than three pairs of braces. Proving that short satisfies the characteristic property requires the Zermelo–Fraenkel set theory axiom of regularity. Moreover, if one uses von Neumann's set-theoretic construction of the natural numbers, then 2 is defined as the set {0, 1} = {0, {0}}, which is indistinguishable from the pair (0, 0)short. Yet another disadvantage of the short pair is the fact that, even if a and b are of the same type, the elements of the short pair are not. 
(However, if a = b then the short version still has cardinality 2, which is something one might expect of any "pair", including any "ordered pair".) Proving that definitions satisfy the characteristic property Prove: (a, b) = (c, d) if and only if a = c and b = d. Kuratowski: If. If a = c and b = d, then {{a}, {a, b}} = {{c}, {c, d}}. Thus (a, b)K = (c, d)K. Only if. Two cases: a = b, and a ≠ b. If a = b: (a, b)K = {{a}, {a, b}} = {{a}, {a, a}} = {{a}}. {{c}, {c, d}} = (c, d)K = (a, b)K = {{a}}. Thus {c} = {c, d} = {a}, which implies a = c and a = d. By hypothesis, a = b. Hence b = d. If a ≠ b, then (a, b)K = (c, d)K implies {{a}, {a, b}} = {{c}, {c, d}}. Suppose {c, d} = {a}. Then c = d = a, and so {{c}, {c, d}} = {{a}, {a, a}} = {{a}, {a}} = {{a}}. But then {{a}, {a, b}} would also equal {{a}}, so that b = a which contradicts a ≠ b. Suppose {c} = {a, b}. Then a = b = c, which also contradicts a ≠ b. Therefore {c} = {a}, so that c = a and {c, d} = {a, b}. If d = a were true, then {c, d} = {a, a} = {a} ≠ {a, b}, a contradiction. Thus d = b is the case, so that a = c and b = d. Reverse: (a, b)reverse = {{b}, {a, b}} = {{b}, {b, a}} = (b, a)K. If. If a = c and b = d, then {{b}, {a, b}} = {{d}, {c, d}}. Thus (a, b)reverse = (c, d)reverse. Only if. If (a, b)reverse = (c, d)reverse, then (b, a)K = (d, c)K. Therefore, b = d and a = c. Short: If: If a = c and b = d, then {a, {a, b}} = {c, {c, d}}. Thus (a, b)short = (c, d)short. Only if: Suppose {a, {a, b}} = {c, {c, d}}. Then a is in the left hand side, and thus in the right hand side. Because equal sets have equal elements, one of a = c or a = {c, d} must be the case. If a = {c, d}, then by similar reasoning as above, {a, b} is in the right hand side, so {a, b} = c or {a, b} = {c, d}. If {a, b} = c then c is in {c, d} = a and a is in c, and this combination contradicts the axiom of regularity, as {a, c} has no minimal element under the relation "element of." If {a, b} = {c, d}, then a is an element of a, from a = {c, d} = {a, b}, again contradicting regularity. Hence a = c must hold. Again, we see that {a, b} = c or {a, b} = {c, d}. The option {a, b} = c and a = c implies that c is an element of c, contradicting regularity. So we have a = c and {a, b} = {c, d}, and so: {b} = {a, b} \ {a} = {c, d} \ {c} = {d}, so b = d. Quine–Rosser definition Rosser (1953) employed a definition of the ordered pair due to Quine which requires a prior definition of the natural numbers. Let be the set of natural numbers and define first The function increments its argument if it is a natural number and leaves it as is otherwise; the number 0 does not appear in the range of . As is the set of the elements of not in go on with This is the set image of a set under , sometimes denoted by as well. Applying function to a set x simply increments every natural number in it. In particular, the result never contains the number 0, so that for any sets x and y, Further, define By this, does always contain the number 0. Finally, define the ordered pair (A, B) as the disjoint union (which is in alternate notation). Extracting all the elements of the pair that do not contain 0 and undoing yields A. Likewise, B can be recovered from the elements of the pair that do contain 0. For example, the pair is encoded as provided . In type theory and in outgrowths thereof such as the axiomatic set theory NF, the Quine–Rosser pair has the same type as its projections and hence is termed a "type-level" ordered pair. 
Hence this definition has the advantage of enabling a function, defined as a set of ordered pairs, to have a type only 1 higher than the type of its arguments. This definition works only if the set of natural numbers is infinite. This is the case in NF, but not in type theory or in NFU. J. Barkley Rosser showed that the existence of such a type-level ordered pair (or even a "type-raising by 1" ordered pair) implies the axiom of infinity. For an extensive discussion of the ordered pair in the context of Quinian set theories, see Holmes (1998). Cantor–Frege definition Early in the development of the set theory, before paradoxes were discovered, Cantor followed Frege by defining the ordered pair of two sets as the class of all relations that hold between these sets, assuming that the notion of relation is primitive: This definition is inadmissible in most modern formalized set theories and is methodologically similar to defining the cardinal of a set as the class of all sets equipotent with the given set. Morse definition Morse–Kelley set theory makes free use of proper classes. Morse defined the ordered pair so that its projections could be proper classes as well as sets. (The Kuratowski definition does not allow this.) He first defined ordered pairs whose projections are sets in Kuratowski's manner. He then redefined the pair where the component Cartesian products are Kuratowski pairs of sets and where This renders possible pairs whose projections are proper classes. The Quine–Rosser definition above also admits proper classes as projections. Similarly the triple is defined as a 3-tuple as follows: The use of the singleton set which has an inserted empty set allows tuples to have the uniqueness property that if a is an n-tuple and b is an m-tuple and a = b then n = m. Ordered triples which are defined as ordered pairs do not have this property with respect to ordered pairs. Axiomatic definition Ordered pairs can also be introduced in Zermelo–Fraenkel set theory (ZF) axiomatically by just adding to ZF a new function symbol of arity 2 (it is usually omitted) and a defining axiom for : This definition is acceptable because this extension of ZF is a conservative extension. The definition helps to avoid so called accidental theorems like (a,a) = {{a}}, and {a} ∈ (a,b), if Kuratowski's definition (a,b) = {{a}, {a,b}} was used. Category theory A category-theoretic product A × B in a category of sets represents the set of ordered pairs, with the first element coming from A and the second coming from B. In this context the characteristic property above is a consequence of the universal property of the product and the fact that elements of a set X can be identified with morphisms from 1 (a one element set) to X. While different objects may have the universal property, they are all naturally isomorphic.
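To make the Kuratowski encoding discussed above more tangible, the following is a minimal sketch in Python (illustrative only; the names kuratowski_pair, first and second are not from any standard library) that models sets as frozensets and checks the characteristic property on a few examples.

```python
# A minimal sketch (not part of any library) of the Kuratowski encoding
# (a, b) := {{a}, {a, b}} using Python frozensets.

def kuratowski_pair(a, b):
    """Encode the ordered pair (a, b) as {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

def first(p):
    """First coordinate: the element common to every member of p."""
    return next(iter(frozenset.intersection(*p)))

def second(p):
    """Second coordinate: the element that the union of p adds beyond its
    intersection, or the first coordinate when the pair has the form (a, a)."""
    union = frozenset.union(*p)
    inter = frozenset.intersection(*p)
    rest = union - inter
    return next(iter(rest)) if rest else next(iter(inter))

# Characteristic property: (a, b) = (c, d) if and only if a = c and b = d.
assert kuratowski_pair(1, 2) == kuratowski_pair(1, 2)
assert kuratowski_pair(1, 2) != kuratowski_pair(2, 1)
assert kuratowski_pair(3, 3) == kuratowski_pair(3, 3)
assert (first(kuratowski_pair(1, 2)), second(kuratowski_pair(1, 2))) == (1, 2)
assert (first(kuratowski_pair(3, 3)), second(kuratowski_pair(3, 3))) == (3, 3)
```

The two extraction functions mirror the set-theoretic projections π1 and π2 described above: the first coordinate is the element shared by all members of the pair, and the second is whatever the union contributes beyond the intersection.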
Mathematics
Set theory
null
22373
https://en.wikipedia.org/wiki/Object%20code
Object code
In computing, object code or object module is the product of an assembler or compiler. In a general sense, object code is a sequence of statements or instructions in a computer language, usually a machine code language (i.e., binary) or an intermediate language such as register transfer language (RTL). The term indicates that the code is the goal or result of the compiling process, with some early sources referring to source code as a "subject program". Details Object files can in turn be linked to form an executable file or library file. In order to be used, object code must either be placed in an executable file, a library file, or an object file. Object code is a portion of machine code that has not yet been linked into a complete program. It is the machine code for one particular library or module that will make up the completed product. It may also contain placeholders or offsets, not found in the machine code of a completed program, that the linker will use to connect everything together. Whereas machine code is binary code that can be executed directly by the CPU, object code has the jumps and inter-module references partially parametrized so that a linker can fill them in. An object file is assumed to begin at a specific location in memory, often zero. It contains information on instructions that reference memory, so that the linker can relocate the code when combining multiple object files into a single program. An assembler is used to convert assembly code into machine code (object code). A linker links several object (and library) files to generate an executable. Assemblers (and some compilers) can also assemble directly to machine code to produce executable files without the object intermediary step.
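As a hedged illustration of the pipeline just described, the short Python sketch below drives a C compiler to produce an object file and then link it into an executable; it assumes a Unix-like system with GCC available on the PATH, and the file names are purely illustrative.

```python
# A minimal sketch (assuming a Unix-like system with GCC on PATH) showing the
# usual path from source code to object code to a linked executable.
# File names here (hello.c, hello.o, hello) are illustrative only.
import subprocess
from pathlib import Path

Path("hello.c").write_text(
    '#include <stdio.h>\n'
    'int main(void) { puts("hello"); return 0; }\n'
)

# Compile (and assemble) to an object file only; -c stops before linking,
# so hello.o still contains unresolved references such as puts.
subprocess.run(["gcc", "-c", "hello.c", "-o", "hello.o"], check=True)

# Link the object file (plus the C standard library, added by the gcc driver)
# into a complete executable.
subprocess.run(["gcc", "hello.o", "-o", "hello"], check=True)

# Run the result.
subprocess.run(["./hello"], check=True)
```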
Technology
Software development: General
null
22385
https://en.wikipedia.org/wiki/Oort%20cloud
Oort cloud
The Oort cloud (), sometimes called the Öpik–Oort cloud, is theorized to be a vast cloud of icy planetesimals surrounding the Sun at distances ranging from 2,000 to 200,000 AU (0.03 to 3.2 light-years). The concept of such a cloud was proposed in 1950 by the Dutch astronomer Jan Oort, in whose honor the idea was named. Oort proposed that the bodies in this cloud replenish and keep constant the number of long-period comets entering the inner Solar System—where they are eventually consumed and destroyed during close approaches to the Sun. The cloud is thought to encompass two regions: a disc-shaped inner Oort cloud aligned with the solar ecliptic (also called its Hills cloud) and a spherical outer Oort cloud enclosing the entire Solar System. Both regions lie well beyond the heliosphere and are in interstellar space. The innermost portion of the Oort cloud is more than a thousand times as distant from the Sun as the Kuiper belt, the scattered disc and the detached objects—three nearer reservoirs of trans-Neptunian objects. The outer limit of the Oort cloud defines the cosmographic boundary of the Solar System. This area is defined by the Sun's Hill sphere, and hence lies at the interface between solar and galactic gravitational dominion. The outer Oort cloud is only loosely bound to the Solar System and its constituents are easily affected by the gravitational pulls of both passing stars and the Milky Way itself. These forces served to moderate and render more circular the highly eccentric orbits of material ejected from the inner Solar System during its early phases of development. The circular orbits of material in the Oort disc are largely thanks to this galactic gravitational torquing. By the same token, galactic interference in the motion of Oort bodies occasionally dislodges comets from their orbits within the cloud, sending them into the inner Solar System. Based on their orbits, most but not all of the short-period comets appear to have come from the Oort disc. Other short-period comets may have originated from the far larger spherical cloud. Astronomers hypothesize that the material presently in the Oort cloud formed much closer to the Sun, in the protoplanetary disc, and was then scattered far into space through the gravitational influence of the giant planets. No direct observation of the Oort cloud is possible with present imaging technology. Nevertheless, the cloud is thought to be the source that replenishes most long-period and Halley-type comets, which are eventually consumed by their close approaches to the Sun after entering the inner Solar System. The cloud may also serve the same function for many of the centaurs and Jupiter-family comets. Development of theory By the turn of the 20th century, it was understood that there were two main classes of comet: short-period comets (also called ecliptic comets) and long-period comets (also called nearly isotropic comets). Ecliptic comets have relatively small orbits aligned near the ecliptic plane and are not found much farther than the Kuiper cliff around 50 AU from the Sun (the orbit of Neptune averages about 30 AU and 177P/Barnard has aphelion around 48 AU). Long-period comets, on the other hand, travel in very large orbits thousands of AU from the Sun and are isotropically distributed. This means long-period comets appear from every direction in the sky, both above and below the ecliptic plane. 
The origin of these comets was not well understood, and many long-period comets were initially assumed to be on parabolic trajectories, making them one-time visitors to the Sun from interstellar space. In 1907, Armin Otto Leuschner suggested that many of the comets then thought to have parabolic orbits in fact moved along extremely large elliptical orbits that would return them to the inner Solar System after long intervals during which they were invisible to Earth-based astronomy. In 1932, the Estonian astronomer Ernst Öpik proposed a reservoir of long-period comets in the form of an orbiting cloud at the outermost edge of the Solar System. Dutch astronomer Jan Oort revived this basic idea in 1950 to resolve a paradox about the origin of comets. The following facts are not easily reconcilable with the highly elliptical orbits in which long-period comets are always found: Over millions and billions of years the orbits of Oort cloud comets are unstable. Celestial dynamics will eventually dictate that a comet must be pulled away by a passing star, collide with the Sun or a planet, or be ejected from the Solar System through planetary perturbations. Moreover, the volatile composition of comets means that as they repeatedly approach the Sun radiation gradually boils the volatiles off until the comet splits or develops an insulating crust that prevents further outgassing. Oort reasoned that comets with orbits that closely approach the Sun cannot have been doing so since the condensation of the protoplanetary disc, more than 4.5 billion years ago. Hence long-period comets could not have formed in the current orbits in which they are always discovered and must have been held in an outer reservoir for nearly all of their existence. Oort also studied tables of ephemerides for long-period comets and discovered that there is a curious concentration of long-period comets whose farthest retreat from the Sun (their aphelia) cluster around 20,000 AU. This suggested a reservoir at that distance with a spherical, isotropic distribution. He also proposed that the relatively rare comets with orbits of about 10,000 AU probably went through one or more orbits into the inner Solar System and there had their orbits drawn inward by the gravity of the planets. Structure and composition The Oort cloud is thought to occupy a vast space somewhere between from the Sun to as far out as or even . The region can be subdivided into a spherical outer Oort cloud with a radius of some and a torus-shaped inner Oort cloud with a radius of . The inner Oort cloud is sometimes known as the Hills cloud, named for Jack G. Hills, who proposed its existence in 1981. Models predict the inner cloud to be the much denser of the two, having tens or hundreds of times as many cometary nuclei as the outer cloud. The Hills cloud is thought to be necessary to explain the continued existence of the Oort cloud after billions of years. Because it lies at the interface between the dominion of Solar and galactic gravitation, the objects comprising the outer Oort cloud are only weakly bound to the Sun. This in turn allows small perturbations from nearby stars or the Milky Way itself to inject long-period (and possibly Halley-type) comets inside the orbit of Neptune. This process ought to have depleted the sparser, outer cloud and yet long-period comets with orbits well above or below the ecliptic continue to be observed. 
The Hills cloud is thought to be a secondary reservoir of cometary nuclei and the source of replenishment for the tenuous outer cloud as the latter's numbers are gradually depleted through losses to the inner Solar System. The outer Oort cloud may have trillions of objects larger than , and billions with diameters of . This corresponds to an absolute magnitude of more than 11. On this analysis, "neighboring" objects in the outer cloud are separated by a significant fraction of 1 AU, tens of millions of kilometres. The outer cloud's total mass is not known, but assuming that Halley's Comet is a suitable proxy for the nuclei composing the outer Oort cloud, their combined mass would be roughly , or five Earth masses. Formerly the outer cloud was thought to be more massive by two orders of magnitude, containing up to 380 Earth masses, but improved knowledge of the size distribution of long-period comets has led to lower estimates. No estimates of the mass of the inner Oort cloud have been published as of 2023. If analyses of comets are representative of the whole, the vast majority of Oort-cloud objects consist of ices such as water, methane, ethane, carbon monoxide and hydrogen cyanide. However, the discovery of the object , an object whose appearance was consistent with a D-type asteroid in an orbit typical of a long-period comet, prompted theoretical research that suggests that the Oort cloud population consists of roughly one to two percent asteroids. Analysis of the carbon and nitrogen isotope ratios in both the long-period and Jupiter-family comets shows little difference between the two, despite their presumably vastly separate regions of origin. This suggests that both originated from the original protosolar cloud, a conclusion also supported by studies of granular size in Oort-cloud comets and by the recent impact study of Jupiter-family comet Tempel 1. Origin The Oort cloud is thought to have developed after the formation of planets from the primordial protoplanetary disc approximately 4.6 billion years ago. The most widely accepted hypothesis is that the Oort cloud's objects initially coalesced much closer to the Sun as part of the same process that formed the planets and minor planets. After formation, strong gravitational interactions with young gas giants, such as Jupiter, scattered the objects into extremely wide elliptical or parabolic orbits that were subsequently modified by perturbations from passing stars and giant molecular clouds into long-lived orbits detached from the gas giant region. Recent research has been cited by NASA hypothesizing that a large number of Oort cloud objects are the product of an exchange of materials between the Sun and its sibling stars as they formed and drifted apart and it is suggested that many—possibly the majority—of Oort cloud objects did not form in close proximity to the Sun. Simulations of the evolution of the Oort cloud from the beginnings of the Solar System to the present suggest that the cloud's mass peaked around 800 million years after formation, as the pace of accretion and collision slowed and depletion began to overtake supply. Models by Julio Ángel Fernández suggest that the scattered disc, which is the main source for periodic comets in the Solar System, might also be the primary source for Oort cloud objects. According to the models, about half of the objects scattered travel outward toward the Oort cloud, whereas a quarter are shifted inward to Jupiter's orbit, and a quarter are ejected on hyperbolic orbits. 
The scattered disc might still be supplying the Oort cloud with material. A third of the scattered disc's population is likely to end up in the Oort cloud after 2.5 billion years. Computer models suggest that collisions of cometary debris during the formation period play a far greater role than was previously thought. According to these models, the number of collisions early in the Solar System's history was so great that most comets were destroyed before they reached the Oort cloud. Therefore, the current cumulative mass of the Oort cloud is far less than was once suspected. The estimated mass of the cloud is only a small part of the 50–100 Earth masses of ejected material. Gravitational interaction with nearby stars and galactic tides modified cometary orbits to make them more circular. This explains the nearly spherical shape of the outer Oort cloud. On the other hand, the Hills cloud, which is bound more strongly to the Sun, has not acquired a spherical shape. Recent studies have shown that the formation of the Oort cloud is broadly compatible with the hypothesis that the Solar System formed as part of an embedded cluster of 200–400 stars. These early stars likely played a role in the cloud's formation, since the number of close stellar passages within the cluster was much higher than today, leading to far more frequent perturbations. In June 2010 Harold F. Levison and others suggested on the basis of enhanced computer simulations that the Sun "captured comets from other stars while it was in its birth cluster." Their results imply that "a substantial fraction of the Oort cloud comets, perhaps exceeding 90%, are from the protoplanetary discs of other stars." In July 2020 Amir Siraj and Avi Loeb found that a captured origin for the Oort Cloud in the Sun's birth cluster could address the theoretical tension in explaining the observed ratio of outer Oort cloud to scattered disc objects, and in addition could increase the chances of a captured Planet Nine. Comets Comets are thought to have two separate points of origin in the Solar System. Short-period comets (those with orbits of up to 200 years) are generally accepted to have emerged from either the Kuiper belt or the scattered disc, which are two linked flat discs of icy debris beyond Neptune's orbit at 30 AU and jointly extending out beyond 100 AU. Very long-period comets, such as C/1999 F1 (Catalina), whose orbits last for millions of years, are thought to originate directly from the outer Oort cloud. Other comets modeled to have come directly from the outer Oort cloud include C/2006 P1 (McNaught), C/2010 X1 (Elenin), Comet ISON, C/2013 A1 (Siding Spring), C/2017 K2, and C/2017 T2 (PANSTARRS). The orbits within the Kuiper belt are relatively stable, so very few comets are thought to originate there. The scattered disc, however, is dynamically active and is far more likely to be the place of origin for comets. Comets pass from the scattered disc into the realm of the outer planets, becoming what are known as centaurs. These centaurs are then sent farther inward to become the short-period comets. There are two main varieties of short-period comets: Jupiter-family comets (those with semi-major axes of less than 5 AU) and Halley-family comets. Halley-family comets, named for their prototype, Halley's Comet, are unusual in that although they are short-period comets, it is hypothesized that their ultimate origin lies in the Oort cloud, not in the scattered disc. 
Based on their orbits, it is suggested they were long-period comets that were captured by the gravity of the giant planets and sent into the inner Solar System. This process may have also created the present orbits of a significant fraction of the Jupiter-family comets, although the majority of such comets are thought to have originated in the scattered disc. Oort noted that the number of returning comets was far less than his model predicted, and this issue, known as "cometary fading", has yet to be resolved. No dynamical process is known to explain the smaller number of observed comets than Oort estimated. Hypotheses for this discrepancy include the destruction of comets due to tidal stresses, impact or heating; the loss of all volatiles, rendering some comets invisible; or the formation of a non-volatile crust on the surface. Dynamical studies of hypothetical Oort cloud comets have estimated that their occurrence in the outer-planet region would be several times higher than in the inner-planet region. This discrepancy may be due to the gravitational attraction of Jupiter, which acts as a kind of barrier, trapping incoming comets and causing them to collide with it, just as it did with Comet Shoemaker–Levy 9 in 1994. An example of a typical dynamically old comet with an origin in the Oort cloud could be C/2018 F4. Tidal effects Most of the comets seen close to the Sun seem to have reached their current positions through gravitational perturbation of the Oort cloud by the tidal force exerted by the Milky Way. Just as the Moon's tidal force deforms Earth's oceans, causing the tides to rise and fall, the galactic tide also distorts the orbits of bodies in the outer Solar System. In the charted regions of the Solar System, these effects are negligible compared to the gravity of the Sun, but in the outer reaches of the system the Sun's gravity is weaker and the gradient of the Milky Way's gravitational field becomes significant: the galactic tide stretches the cloud along an axis directed toward the Galactic Center and compresses it along the other two axes. These small perturbations can shift orbits in the Oort cloud to bring objects close to the Sun. The point at which the Sun's gravity concedes its influence to the galactic tide is called the tidal truncation radius. It lies at a radius of 100,000 to 200,000 au, and marks the outer boundary of the Oort cloud. Some scholars theorize that the galactic tide may have contributed to the formation of the Oort cloud by increasing the perihelia (smallest distances to the Sun) of planetesimals with large aphelia (largest distances to the Sun). The effects of the galactic tide are quite complex, and depend heavily on the behaviour of individual objects within a planetary system. Cumulatively, however, the effect can be quite significant: up to 90% of all comets originating from the Oort cloud may be the result of the galactic tide. Statistical models of the observed orbits of long-period comets argue that the galactic tide is the principal means by which their orbits are perturbed toward the inner Solar System. Stellar perturbations and stellar companion hypotheses Besides the galactic tide, the main trigger for sending comets into the inner Solar System is thought to be interaction between the Sun's Oort cloud and the gravitational fields of nearby stars or giant molecular clouds. The orbit of the Sun through the plane of the Milky Way sometimes brings it into relatively close proximity to other stellar systems. 
For example, it is hypothesized that 70,000 years ago Scholz's Star passed through the outer Oort cloud (although its low mass and high relative velocity limited its effect). During the next 10 million years the known star with the greatest possibility of perturbing the Oort cloud is Gliese 710. This process could also scatter Oort cloud objects out of the ecliptic plane, potentially also explaining the cloud's spherical distribution. In 1984, physicist Richard A. Muller postulated that the Sun has an as-yet undetected companion, either a brown dwarf or a red dwarf, in an elliptical orbit within the Oort cloud. This object, known as Nemesis, was hypothesized to pass through a portion of the Oort cloud approximately every 26 million years, bombarding the inner Solar System with comets. However, to date no evidence of Nemesis has been found, and many lines of evidence (such as crater counts) have thrown its existence into doubt. Recent scientific analysis no longer supports the idea that extinctions on Earth happen at regular, repeating intervals, so the Nemesis hypothesis is no longer considered necessary. A somewhat similar hypothesis was advanced by astronomer John J. Matese of the University of Louisiana at Lafayette in 2002. He contends that more comets are arriving in the inner Solar System from a particular region of the postulated Oort cloud than can be explained by the galactic tide or stellar perturbations alone, and that the most likely cause would be a Jupiter-mass object in a distant orbit. This hypothetical gas giant was nicknamed Tyche. The WISE mission, an all-sky survey using parallax measurements in order to clarify local star distances, was capable of proving or disproving the Tyche hypothesis. In 2014, NASA announced that the WISE survey had ruled out any object as they had defined it. Future exploration Space probes have yet to reach the area of the Oort cloud. Voyager 1, the fastest and farthest of the interplanetary space probes currently leaving the Solar System, will reach the Oort cloud in about 300 years and would take about 30,000 years to pass through it. However, around 2025, the radioisotope thermoelectric generators on Voyager 1 will no longer supply enough power to operate any of its scientific instruments, preventing any further exploration by Voyager 1. The other four probes currently escaping the Solar System have either already stopped functioning or are predicted to also stop functioning before they reach the Oort cloud. In the 1980s, there was a concept for a probe that could reach 1,000 AU in 50 years, called TAU; among its mission goals would be a search for the Oort cloud. In the 2014 Announcement of Opportunity for the Discovery program, an observatory to detect objects in the Oort cloud (and Kuiper belt), called the "Whipple Mission", was proposed. It would monitor distant stars with a photometer, looking for transits up to 10,000 AU away. The observatory was proposed to operate in a halo orbit around L2, with a suggested 5-year mission. It was also suggested that the Kepler space telescope could have been capable of detecting objects in the Oort cloud.
Physical sciences
Solar System
null
22393
https://en.wikipedia.org/wiki/Organelle
Organelle
In cell biology, an organelle is a specialized subunit, usually within a cell, that has a specific function. The name organelle comes from the idea that these structures are parts of cells, as organs are to the body, hence organelle, the suffix -elle being a diminutive. Organelles are either separately enclosed within their own lipid bilayers (also called membrane-bounded organelles) or are spatially distinct functional units without a surrounding lipid bilayer (non-membrane bounded organelles). Although most organelles are functional units within cells, some functional units that extend outside of cells are often termed organelles, such as cilia, the flagellum and archaellum, and the trichocyst (these could be referred to as membrane bound in the sense that they are attached to (or bound to) the membrane). Organelles are identified by microscopy, and can also be purified by cell fractionation. There are many types of organelles, particularly in eukaryotic cells. They include structures that make up the endomembrane system (such as the nuclear envelope, endoplasmic reticulum, and Golgi apparatus), and other structures such as mitochondria and plastids. While prokaryotes do not possess eukaryotic organelles, some do contain protein-shelled bacterial microcompartments, which are thought to act as primitive prokaryotic organelles; and there is also evidence of other membrane-bounded structures. Also, the prokaryotic flagellum which protrudes outside the cell, and its motor, as well as the largely extracellular pilus, are often spoken of as organelles. History and terminology In biology, organs are defined as confined functional units within an organism. The analogy of bodily organs to microscopic cellular substructures is obvious, as even in early works the authors of the respective textbooks rarely elaborated on the distinction between the two. In the 1830s, Félix Dujardin refuted Ehrenberg's theory that microorganisms have the same organs as multicellular animals, only smaller. Credited as the first to use a diminutive of organ (i.e., little organ) for cellular structures was German zoologist Karl August Möbius (1884), who used the term organula (plural of organulum, the diminutive of Latin organum). In a footnote, which was published as a correction in the next issue of the journal, he justified his suggestion to call organs of unicellular organisms "organella" since they are only differently formed parts of one cell, in contrast to multicellular organs of multicellular organisms. Types While most cell biologists consider the term organelle to be synonymous with cell compartment, a space often bounded by one or two lipid bilayers, some cell biologists choose to limit the term to include only those cell compartments that contain deoxyribonucleic acid (DNA), having originated from formerly autonomous microscopic organisms acquired via endosymbiosis. The first, broader conception of organelles is that they are membrane-bounded structures. However, even by using this definition, some parts of the cell that have been shown to be distinct functional units do not qualify as organelles. Therefore, the use of organelle to also refer to non-membrane bounded structures such as ribosomes is common and accepted. This has led many texts to delineate between membrane-bounded and non-membrane bounded organelles. 
The non-membrane bounded organelles, also called large biomolecular complexes, are large assemblies of macromolecules that carry out particular and specialized functions, but they lack membrane boundaries. Many of these are referred to as "proteinaceous organelles" as their main structure is made of proteins. Such cell structures include: large RNA and protein complexes (the ribosome, spliceosome and vault); large protein complexes (the proteasome, DNA polymerase III holoenzyme, RNA polymerase II holoenzyme, symmetric viral capsids, and the complex of GroEL and GroES); membrane protein complexes (the porosome, photosystem I and ATP synthase); large DNA and protein complexes (the nucleosome); the centriole and microtubule-organizing center (MTOC); the cytoskeleton; the flagellum; the nucleolus; the stress granule; the germ cell granule; and the neuronal transport granule. The mechanisms by which such non-membrane bounded organelles form and retain their spatial integrity have been likened to liquid-liquid phase separation. The second, more restrictive definition of organelle includes only those cell compartments that contain deoxyribonucleic acid (DNA), having originated from formerly autonomous microscopic organisms acquired via endosymbiosis. Using this definition, there would only be two broad classes of organelles (i.e. those that contain their own DNA, and have originated from endosymbiotic bacteria): mitochondria (in almost all eukaryotes) and plastids (e.g. in plants, algae, and some protists). Other organelles are also suggested to have endosymbiotic origins, but do not contain their own DNA (notably the flagellum – see evolution of flagella). Eukaryotic organelles Eukaryotic cells are structurally complex, and by definition are organized, in part, by interior compartments that are themselves enclosed by lipid membranes that resemble the outermost cell membrane. The larger organelles, such as the nucleus and vacuoles, are easily visible with the light microscope. They were among the first biological discoveries made after the invention of the microscope. Not all eukaryotic cells have each of the organelles listed below. Exceptional organisms have cells that do not include some organelles (such as mitochondria) that might otherwise be considered universal to eukaryotes. The several plastids including chloroplasts are distributed among some but not all eukaryotes. There are also occasional exceptions to the number of membranes surrounding organelles, listed in the tables below (e.g., some that are listed as double-membrane are sometimes found with single or triple membranes). In addition, the number of individual organelles of each type found in a given cell varies depending upon the function of that cell. The cell membrane and cell wall are not organelles. Other related structures include the cytosol, the endomembrane system, the nucleosome and the microtubule. Prokaryotic organelles Prokaryotes are not as structurally complex as eukaryotes, and were once thought to have little internal organization, and lack cellular compartments and internal membranes; but slowly, details are emerging about prokaryotic internal structures that overturn these assumptions. An early false turn was the idea developed in the 1970s that bacteria might contain cell membrane folds termed mesosomes, but these were later shown to be artifacts produced by the chemicals used to prepare the cells for electron microscopy. However, there is increasing evidence of compartmentalization in at least some prokaryotes. Research has revealed that at least some prokaryotes have microcompartments, such as carboxysomes. 
These subcellular compartments are 100–200 nm in diameter and are enclosed by a shell of proteins. Even more striking is the description of membrane-bounded magnetosomes in bacteria, reported in 2006. The bacterial phylum Planctomycetota has revealed a number of compartmentalization features. The Planctomycetota cell plan includes intracytoplasmic membranes that separate the cytoplasm into paryphoplasm (an outer ribosome-free space) and pirellulosome (or riboplasm, an inner ribosome-containing space). Membrane-bounded anammoxosomes have been discovered in five Planctomycetota "anammox" genera, which perform anaerobic ammonium oxidation. In the Planctomycetota species Gemmata obscuriglobus, a nucleus-like structure surrounded by lipid membranes has been reported. Compartmentalization is a feature of prokaryotic photosynthetic structures. Purple bacteria have "chromatophores", which are reaction centers found in invaginations of the cell membrane. Green sulfur bacteria have chlorosomes, which are photosynthetic antenna complexes found bonded to cell membranes. Cyanobacteria have internal thylakoid membranes for light-dependent photosynthesis; studies have revealed that the cell membrane and the thylakoid membranes are not continuous with each other.
Biology and health sciences
Organelles and other cell parts
null
22431
https://en.wikipedia.org/wiki/Oracle%20machine
Oracle machine
In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to solve certain problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used. Oracles An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box" that is able to produce a solution for any instance of a given computational problem: A decision problem is represented as a set A of natural numbers (or strings). An instance of the problem is an arbitrary natural number (or string). The solution to the instance is "YES" if the number (string) is in the set, and "NO" otherwise. A function problem is represented by a function f from natural numbers (or strings) to natural numbers (or strings). An instance of the problem is an input x for f. The solution is the value f(x). An oracle machine can perform all of the usual operations of a Turing machine, and can also query the oracle to obtain a solution to any instance of the computational problem for that oracle. For example, if the problem is a decision problem for a set A of natural numbers, the oracle machine supplies the oracle with a natural number, and the oracle responds with "yes" or "no" stating whether that number is an element of A. Definitions There are many equivalent definitions of oracle Turing machines, as discussed below. The one presented here is from . An oracle machine, like a Turing machine, includes: a work tape: a sequence of cells without beginning or end, each of which may contain a B (for blank) or a symbol from the tape alphabet; a read/write head, which rests on a single cell of the work tape and can read the data there, write new data, and increment or decrement its position along the tape; a control mechanism, which can be in one of a finite number of states, and which will perform different actions (reading data, writing data, moving the control mechanism, and changing states) depending on the current state and the data being read. In addition to these components, an oracle machine also includes: an oracle tape, which is a semi-infinite tape separate from the work tape. The alphabet for the oracle tape may be different from the alphabet for the work tape. an oracle head which, like the read/write head, can move left or right along the oracle tape reading and writing symbols; two special states: the ASK state and the RESPONSE state. From time to time, the oracle machine may enter the ASK state. When this happens, the following actions are performed in a single computational step: the contents of the oracle tape are viewed as an instance of the oracle's computational problem; the oracle is consulted, and the contents of the oracle tape are replaced with the solution to that instance of the problem; the oracle head is moved to the first square on the oracle tape; the state of the oracle machine is changed to RESPONSE. The effect of changing to the ASK state is thus to receive, in a single step, a solution to the problem instance that is written on the oracle tape. 
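As an informal illustration of consulting an oracle as a black box, here is a minimal Python sketch (a toy under stated assumptions, not the formal machine defined above): the oracle is modeled as a membership function for a set A of natural numbers, and a decision procedure for a derived language uses it without knowing how it works. The names and the choice of primes for A are illustrative only.

```python
# A minimal sketch (illustrative only) of a decision procedure with black-box
# oracle access.  The oracle is just a membership test for some set A of
# natural numbers; the procedure may call it but never looks inside it.
from typing import Callable

Oracle = Callable[[int], bool]   # answers "is n in A?" in one step

def decides_consecutive_pair(n: int, oracle: Oracle) -> bool:
    """Decide the derived language {n : n in A and n + 1 in A}
    using two oracle queries, regardless of how hard A itself is."""
    return oracle(n) and oracle(n + 1)

# Toy oracle standing in for the black box: here A is the set of primes.
def prime_oracle(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(decides_consecutive_pair(2, prime_oracle))   # True  (2 and 3 are in A)
print(decides_consecutive_pair(7, prime_oracle))   # False (8 is not in A)
```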
Alternative definitions There are many alternative definitions to the one presented above. Many of these are specialized for the case where the oracle solves a decision problem. In this case: Some definitions, instead of writing the answer to the oracle tape, have two special states YES and NO in addition to the ASK state. When the oracle is consulted, the next state is chosen to be YES if the contents of the oracle tape are in the oracle set, and chosen to be NO if the contents are not in the oracle set. Some definitions eschew the separate oracle tape. When the oracle state is entered, a tape symbol is specified. The oracle is queried with the number of times that this tape symbol appears on the work tape. If that number is in the oracle set, the next state is the YES state; if it is not, the next state is the NO state. Another alternative definition makes the oracle tape read-only, and eliminates the ASK and RESPONSE states entirely. Before the machine is started, the indicator function of the oracle set is written on the oracle tape using symbols 0 and 1. The machine is then able to query the oracle by scanning to the correct square on the oracle tape and reading the value located there. These definitions are equivalent from the point of view of Turing computability: a function is oracle-computable from a given oracle under all of these definitions if it is oracle-computable under any of them. The definitions are not equivalent, however, from the point of view of computational complexity. A definition such as the one by van Melkebeek, using an oracle tape which may have its own alphabet, is required in general. Complexity classes of oracle machines The complexity class of decision problems solvable by an algorithm in class A with an oracle for a language L is called A^L. For example, P^SAT is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for the Boolean satisfiability problem. The notation A^B can be extended to a set of languages B (or a complexity class B), by using the following definition: A^B is the union of the classes A^L over all languages L in B. When a language L is complete for some class B, then A^L = A^B provided that machines in A can execute reductions used in the completeness definition of class B. In particular, since SAT is NP-complete with respect to polynomial time reductions, P^SAT = P^NP. However, if A = DLOGTIME, then A^SAT may not equal A^NP. (The definition of A^B given above is not completely standard. In some contexts, such as the proof of the time and space hierarchy theorems, it is more useful to assume that the abstract machine defining the class A^B only has access to a single oracle for one language. In this context, A^B is not defined if the complexity class B does not have any complete problems with respect to the reductions available to A.) It is understood that NP ⊆ P^NP, but the question of whether NP^NP, P^NP, NP, and P are equal remains open. It is believed they are different, and this leads to the definition of the polynomial hierarchy. Oracle machines are useful for investigating the relationship between complexity classes P and NP, by considering the relationship between P^A and NP^A for an oracle A. In particular, it has been shown there exist languages A and B such that P^A = NP^A and P^B ≠ NP^B. The fact that the P = NP question relativizes both ways is taken as evidence that answering this question is difficult, because a proof technique that relativizes (i.e., one unaffected by the addition of an oracle) will not answer the P = NP question. Most proof techniques relativize. 
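As a concrete illustration of what oracle access buys, the sketch below treats a brute-force satisfiability test as the black-box SAT oracle (its internal cost is not charged to the querying machine) and recovers a satisfying assignment with only polynomially many queries, via the self-reducibility of SAT. This is the flavour of computation available to a polynomial-time machine with a SAT oracle; the code and function names are illustrative assumptions, not drawn from the article.

```python
from itertools import product

def sat_oracle(cnf, n_vars):
    """Black-box SAT oracle: decides satisfiability by brute force.
    The exponential work inside the box is not counted against the caller."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in cnf):
            return True
    return False

def assignment_via_oracle(cnf, n_vars):
    """One oracle query per variable fixes a satisfying assignment,
    illustrating the kind of procedure available with a SAT oracle."""
    if not sat_oracle(cnf, n_vars):
        return None
    fixed = []                                    # literals forced so far
    for v in range(1, n_vars + 1):
        trial = cnf + [[lit] for lit in fixed] + [[v]]   # try variable v = True
        fixed.append(v if sat_oracle(trial, n_vars) else -v)
    return fixed

# Clauses are lists of signed variable indices (DIMACS-like).
print(assignment_via_oracle([[1, -2], [2, 3], [-1, -3]], 3))   # e.g. [1, 2, -3]
```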
One may consider the case where an oracle is chosen randomly from among all possible oracles (an infinite set). It has been shown that, in this case, P^A ≠ NP^A with probability 1. When a question is true for almost all oracles, it is said to be true for a random oracle. This choice of terminology is justified by the fact that random oracles support a statement with probability 0 or 1 only. (This follows from Kolmogorov's zero–one law.) This is only weak evidence that P ≠ NP, since a statement may be true for a random oracle but false for ordinary Turing machines; for example, IP^A ≠ PSPACE^A for a random oracle A but IP = PSPACE. Oracles and halting problems A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but it cannot determine, in general, whether machines equivalent to itself will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem. This hierarchy of machines can be used to define the arithmetical hierarchy. Applications to cryptography In cryptography, oracles are used to make arguments for the security of cryptographic protocols where a hash function is used. A security reduction (proof of security) for the protocol is given in the case where, instead of a hash function, a random oracle answers each query randomly but consistently; the oracle is assumed to be available to all parties including the attacker, as the hash function is. Such a proof shows that unless the attacker solves the hard problem at the heart of the security reduction, they must make use of some interesting property of the hash function to break the protocol; they cannot treat the hash function as a black box (i.e., as a random oracle).
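The phrase "randomly but consistently" is usually modelled by lazy sampling: a fresh uniformly random answer is drawn the first time a query is seen and memoised thereafter, and the same oracle object is shared by every party in the security game. A minimal sketch, with arbitrary choices of class name and output length:

```python
import secrets

class RandomOracle:
    """Lazy-sampled random oracle: each new query gets a fresh uniformly random
    answer; repeated queries get the same answer back (consistency)."""
    def __init__(self, out_bytes: int = 32):
        self.out_bytes = out_bytes
        self.table = {}

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = secrets.token_bytes(self.out_bytes)
        return self.table[message]

# The same oracle instance is visible to every party, including the attacker.
ro = RandomOracle()
assert ro.query(b"attack at dawn") == ro.query(b"attack at dawn")   # consistent
assert ro.query(b"attack at dawn") != ro.query(b"attack at dusk")   # independent (w.h.p.)
```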
Mathematics
Computability theory
null
22433
https://en.wikipedia.org/wiki/Orangutan
Orangutan
Orangutans are great apes native to the rainforests of Indonesia and Malaysia. They are now found only in parts of Borneo and Sumatra, but during the Pleistocene they ranged throughout Southeast Asia and South China. Classified in the genus Pongo, orangutans were originally considered to be one species. From 1996, they were divided into two species: the Bornean orangutan (P. pygmaeus, with three subspecies) and the Sumatran orangutan (P. abelii). A third species, the Tapanuli orangutan (P. tapanuliensis), was identified definitively in 2017. The orangutans are the only surviving species of the subfamily Ponginae, which diverged genetically from the other hominids (gorillas, chimpanzees, and humans) between 19.3 and 15.7 million years ago. The most arboreal of the great apes, orangutans spend most of their time in trees. They have proportionally long arms and short legs, and have reddish-brown hair covering their bodies. Adult males weigh about , while females reach about . Dominant adult males develop distinctive cheek pads or flanges and make long calls that attract females and intimidate rivals; younger subordinate males do not and more resemble adult females. Orangutans are the most solitary of the great apes: social bonds occur primarily between mothers and their dependent offspring. Fruit is the most important component of an orangutan's diet, but they will also eat vegetation, bark, honey, insects and bird eggs. They can live over 30 years, both in the wild and in captivity. Orangutans are among the most intelligent primates. They use a variety of sophisticated tools and construct elaborate sleeping nests each night from branches and foliage. The apes' learning abilities have been studied extensively. There may be distinctive cultures within populations. Orangutans have been featured in literature and art since at least the 18th century, particularly in works that comment on human society. Field studies of the apes were pioneered by primatologist Birutė Galdikas and they have been kept in captive facilities around the world since at least the early 19th century. All three orangutan species are considered critically endangered. Human activities have caused severe declines in populations and ranges. Threats to wild orangutan populations include poaching (for bushmeat and retaliation for consuming crops), habitat destruction and deforestation (for palm oil cultivation and logging), and the illegal pet trade. Several conservation and rehabilitation organisations are dedicated to the survival of orangutans in the wild. Etymology Most Western sources attribute the name "orangutan" (also written orang-utan, orang utan, orangutang, and ourang-outang) to the Malay words orang, meaning "person", and hutan, meaning "forest". The Malay used the term to indicate forest-dwelling humans; the first recorded Malay use of "orang-utan" referring to the ape identifies it as a Western term. (There is, however, some evidence to suggest that the term may have been used in regard to apes in premodern Old Malay.) In Western sources, the first printed attestation of the word for the apes is in Dutch physician Jacobus Bontius' 1631 Historiae naturalis et medicae Indiae orientalis. He reported that Malays claimed the ape could talk, but preferred not to "lest he be compelled to labour". The word appeared in several German-language descriptions of Indonesian zoology in the 17th century. 
It has been argued that the word comes specifically from the Banjarese variety of Malay, but the age of the Old Javanese sources mentioned above make Old Malay a more likely origin for the term. Cribb and colleagues (2014) suggest that Bontius' account referred not to apes (as this description was from Java where the apes were not known to be from) but to humans suffering some serious medical condition (most likely cretinism) and that his use of the word was misunderstood by Nicolaes Tulp, who was the first to use the term in a publication a decade later. The word was first attested in English in 1693 by physician John Bulwer in the form Orang-Outang, and variants ending with -ng are found in many languages. This spelling (and pronunciation) has remained in use in English up to the present but has come to be regarded as incorrect. The loss of "h" in hutan and the shift from -ng to -n has been taken to suggest the term entered English through Portuguese. In Malay, the term was first attested in 1840, not as an indigenous name but referring to how the English called the animal. The word 'orangutan' in modern Malay and Indonesian was borrowed from English or Dutch in the 20th century—explaining the missing initial 'h' of 'hutan'. The name of the genus, Pongo, comes from a 16th-century account by Andrew Battel, an English sailor held prisoner by the Portuguese in Angola, which describes two anthropoid "monsters" named Pongo and Engeco. He is now believed to have been describing gorillas, but in the 18th century, the terms orangutan and pongo were used for all great apes. French naturalist Bernard Germain de Lacépède used the term Pongo for the genus in 1799. Battel's "Pongo", in turn, is from the Kongo word mpongi or other cognates from the region: Lumbu pungu, Vili mpungu, or Yombi yimpungu. Taxonomy and phylogeny The orangutan was first described scientifically in 1758 in the Systema Naturae of Carl Linnaeus as Homo troglodytes. It was renamed Simia pygmaeus in 1760 by his student Christian Emmanuel Hopp and given the name Pongo by Lacépède in 1799. The populations on the two islands were suggested to be separate species when P. abelii was described by French naturalist René Lesson in 1827. In 2001, P. abelii was confirmed as a full species based on molecular evidence published in 1996, and three distinct populations on Borneo were elevated to subspecies (P. p. pygmaeus, P. p. morio and P. p. wurmbii). The description in 2017 of a third species, P. tapanuliensis, from Sumatra south of Lake Toba, came with a surprising twist: it is more closely related to the Bornean species, P. pygmaeus than to its fellow Sumatran species, P. abelii. The Sumatran orangutan genome was sequenced in January 2011. Following humans and chimpanzees, the Sumatran orangutan became the third species of great ape to have its genome sequenced. Subsequently, the Bornean species had its genome sequenced. Bornean orangutans (P. pygmaeus) have less genetic diversity than in Sumatran ones (P. abelii), despite populations being six to seven times higher in Borneo. The researchers hope these data may help conservationists preserve the endangered ape, as well as learn more about human genetic diseases. Similarly to gorillas and chimpanzees, orangutans have 48 diploid chromosomes, in contrast to humans, which have 46. 
According to molecular evidence, within apes (superfamily Hominoidea), the gibbons diverged during the early Miocene between 24.1 and 19.7 million years ago (mya), and the orangutans diverged from the African great ape lineage between 19.3 and 15.7 mya. Israfil and colleagues (2011) estimated based on mitochondrial, Y-linked, and X-linked loci that the Sumatran and Bornean species diverged 4.9 to 2.9 mya. By contrast, the 2011 genome study suggested that these two species diverged as recently as circa 400,000 years ago. The study also found that orangutans evolved at a slower pace than both chimpanzees and humans. A 2017 genome study found that the Bornean and Tapanuli orangutans diverged from Sumatran orangutans about 3.4 mya, and from each other around 2.4 mya. Millions of years ago, orangutans travelled from mainland Asia to Sumatra and then Borneo as the islands were connected by land bridges during the recent glacial periods when sea levels were much lower. The present range of Tapanuli orangutans is thought to be close to where ancestral orangutans first entered what is now Indonesia from mainland Asia. Fossil record The three orangutan species are the only extant members of the subfamily Ponginae. This subfamily also includes extinct apes such as Lufengpithecus, which occurred 8–2 mya in southern China and Thailand; Indopithecus, which lived in India from 9.2 to 8.6 mya; and Sivapithecus, which lived in India and Pakistan from 12.5 mya until 8.5 mya. These animals likely lived in drier and cooler environments than orangutans do today. Khoratpithecus piriyai, which lived 5–7 mya in Thailand, is believed to be the closest known relative of the living orangutans and inhabited similar environments. The largest known primate, Gigantopithecus, was also a member of Ponginae and lived in China, from 2 mya to 300,000 years ago. The oldest known record of Pongo is from the Early Pleistocene of Chongzuo, consisting of teeth ascribed to extinct species P. weidenreichi. Pongo is found as part of the faunal complex in the Pleistocene cave assemblage in Vietnam, alongside Giganopithecus, though it is known only from teeth. Some fossils described under the name P. hooijeri have been found in Vietnam, and multiple fossil subspecies have been described from several parts of southeastern Asia. It is unclear if these belong to P. pygmaeus or P. abelii or, in fact, represent distinct species. During the Pleistocene, Pongo had a far more extensive range than at present, extending throughout Sundaland and mainland Southeast Asia and South China. Teeth of orangutans are known from Peninsular Malaysia that date to 60,000 years ago. The youngest remains from South China, which are teeth assigned to P. weidenreichi, date to between 66 and 57,000 years ago. The range of orangutans had contracted significantly by the end of the Pleistocene, most likely because of the reduction of forest habitat during the Last Glacial Maximum. They may have nevertheless survived into the Holocene in Cambodia and Vietnam. Characteristics Orangutans display significant sexual dimorphism; females typically stand tall and weigh around , while adult males stand tall and weigh . Compared to humans, they have proportionally long arms, a male orangutan having an arm span of about , and short legs. They are covered in long reddish hair that starts out bright orange and darkens to maroon or chocolate with age, while the skin is grey-black. Though largely hairless, males' faces can develop some hair, giving them a beard. 
Orangutans have small ears and noses; the ears are unlobed. The mean endocranial volume is 397 cm3. The cranium is elevated relative to the face, which is incurved and prognathous. Compared to chimpanzees and gorillas, the brow ridge of an orangutan is underdeveloped. Females and juveniles have relatively circular skulls and thin faces while mature males have a prominent sagittal crest, large cheek pads or flanges, extensive throat pouches and long canines. The cheek pads are made mostly of fatty tissue and are supported by the musculature of the face. The throat pouches act as resonance chambers for making long calls. Orangutan hands have four long fingers but a dramatically shorter opposable thumb for a strong grip on branches as they travel high in the trees. The resting configuration of the fingers is curved, creating a suspensory hook grip. With the thumb out of the way, the fingers (and hands) can grip securely around objects with a small diameter by resting the tops of the fingers against the inside of the palm, thus creating a double-locked grip. Their feet have four long toes and an opposable big toe, giving them hand-like dexterity. The hip joints also allow for their legs to rotate similarly to their arms and shoulders. Orangutans move through the trees by both vertical climbing and suspension. Compared to other great apes, they infrequently descend to the ground where they are more cumbersome. Unlike gorillas and chimpanzees, orangutans are not true knuckle-walkers, instead bending their digits and walking on the sides of their hands and feet. Compared to their relatives in Borneo, Sumatran orangutans are more slender with paler and longer hair and a longer face. Tapanuli orangutans resemble Sumatran orangutans more than Bornean orangutans in body build and hair colour. They have shaggier hair, smaller skulls, and flatter faces than the other two species. Ecology and behaviour Orangutans are mainly arboreal and inhabit tropical rainforest, particularly lowland dipterocarp and old secondary forest. Populations are more concentrated near riverside habitats, such as freshwater and peat swamp forest, while drier forests away from the flooded areas have fewer apes. Population density also decreases at higher elevations. Orangutans occasionally enter grasslands, cultivated fields, gardens, young secondary forest, and shallow lakes. Most of the day is spent feeding, resting, and travelling. They start the day feeding for two to three hours in the morning. They rest during midday, then travel in the late afternoon. When evening arrives, they prepare their nests for the night. Potential predators of orangutans include tigers, clouded leopards and wild dogs. The most common orangutan parasites are nematodes of the genus Strongyloides and the ciliate Balantidium coli. Among Strongyloides, the species S. fuelleborni and S. stercoralis are reported in young individuals. Orangutans also use the plant species Dracaena cantleyi as an anti-inflammatory balm. Captive animals may suffer an upper respiratory tract disease. Diet and feeding Orangutans are primarily fruit-eaters, which can take up 57–80% of their foraging time. Even during times of scarcity, fruit is 16% of their feeding time. Fruits with soft pulp, arils or seed-walls are consumed the most, particularly figs but also drupes and berries. Orangutans are thought to be the sole fruit disperser for some plant species including the vine species Strychnos ignatii which contains the toxic alkaloid strychnine. 
Orangutans also include leaves in their diet, which take up 25% of their average foraging time. Leaves are eaten more when fruit is less available, but even during times of fruit abundance, orangutans will eat leaves 11–20% of the time. They appear to depend on the leaf and stem material of Borassodendron borneensis during times of low fruit abundance. Other food items consumed by the apes include bark, honey, bird eggs, insects and small vertebrates including slow lorises. In some areas, orangutans may practise geophagy, which involves consuming soil and other earth substances. They will uproot soil from the ground as well as eat shelter tubes from tree trunks. Orangutans also visit the sides of cliffs or earth depressions for their mineral licks. Orangutans may eat soils for their anti-toxic kaolin minerals, since their diet contains toxic tannins and phenolic acids. Social life The social structure of the orangutan can be best described as solitary but social; they live a more solitary lifestyle than the other great apes. Bornean orangutans are generally more solitary than Sumatran orangutans. Most social bonds occur between adult females and their dependent and weaned offspring. Resident females live with their offspring in defined home ranges that overlap with those of other adult females, which may be their immediate relatives. One to several resident female home ranges are encompassed within the home range of a resident male, who is their main mating partner. Interactions between adult females range from friendly to avoidance to antagonistic. Flanged males are often hostile to both other flanged males and unflanged males, while unflanged males are more peaceful towards each other. Orangutans disperse and establish their home ranges by age 11. Females tend to live near their birth range, while males disperse farther but may still visit their birth range within their larger home range. They enter a transient phase, which lasts until a male can challenge and displace a dominant, resident male from his home range. Both resident and transient orangutans aggregate on large fruiting trees to feed. The fruits tend to be abundant, so competition is low and individuals may engage in social interactions. Orangutans will also form travelling groups with members moving between different food sources. They are often consortships between an adult male and a female. Social grooming is uncommon among orangutans. Communication Orangutans communicate with various vocals and sounds. Males will make long calls, both to attract females and to advertise themselves to other males. These calls have three components; they begin with grumbles, peak with pulses and end with bubbles. Both sexes will try to intimidate conspecifics with a series of low frequency noises known collectively as the "rolling call". When uncomfortable, an orangutan will produce a "kiss squeak", which involves sucking in air through pursed lips. Mothers produce throatscrapes to keep in contact with their offspring. Infants make soft hoots when distressed. When building a nest, orangutans will produce smacks or blow raspberries. Orangutan calls display consonant- and vowel-like components and they maintain their meaning over great distances. Mother orangutans and offspring also use several different gestures and expressions such as beckoning, stomping, lower lip pushing, object shaking and "presenting" a body part. 
These communicate goals such as "acquire object", "climb on me", "climb on you", "climb over", "move away", "play change: decrease intensity", "resume play" and "stop that". Reproduction and development Males become sexually mature at around age 15. They may exhibit arrested development by not developing the distinctive cheek pads, pronounced throat pouches, long fur, or long calls until a resident dominant male is absent. The transformation from unflanged to flanged can occur quickly. Flanged males attract females in oestrous with their characteristic long calls, which may also suppress development in younger males. Unflanged males wander widely in search of oestrous females and upon finding one, may force copulation on her, the occurrence of which is unusually high among mammals. Females prefer to mate with the fitter flanged males, forming pairs with them and benefiting from their protection. Non-ovulating females do not usually resist copulation with unflanged males, as the chance of conception is low. Homosexual behaviour has been recorded in the context of both affiliative and aggressive interactions. Unlike females of other non-human great ape species, orangutans do not exhibit sexual swellings to signal fertility. A female first gives birth around 15 years of age and they have a six- to nine-year interbirth interval, the longest among the great apes. Gestation is around nine months long and infants are born at a weight of . Usually only a single infant is born; twins are a rare occurrence. Unlike many other primates, male orangutans do not seem to practise infanticide. This may be because they cannot ensure they will sire a female's next offspring, because she does not immediately begin ovulating again after her infant dies. There is evidence that females with offspring under six years old generally avoid adult males. Females do most of the caring of the young. The mother will carry the infant while travelling, suckle it and sleep with it. During its first four months, the infant is almost never without physical contact and clings to its mother's belly. In the following months, the amount of physical contact the infant has with its mother declines. When an orangutan reaches the age of one-and-a-half years, its climbing skills improve and it will travel through the canopy holding hands with other orangutans, a behaviour known as "buddy travel". After two years of age, juvenile orangutans will begin to move away from their mothers temporarily. They reach adolescence at six or seven years of age and are able to live alone but retain some connections with their mothers. Females may nurse their offspring for up to eight years, which is more than any other mammal. Typically, orangutans live over 30 years both in the wild and in captivity. Nesting Orangutans build nests specialised for either day or night use. These are carefully constructed; young orangutans learn from observing their mother's nest-building behaviour. In fact, nest-building allows young orangutans to become less dependent on their mother. From six months of age onwards, orangutans practise nest-building and gain proficiency by the time they are three years old. Construction of a night nest is done by following a sequence of steps. Initially, a suitable tree is located. Orangutans are choosy about sites, though nests can be found in many tree species. To establish a foundation, the ape grabs the large branches under it and bends them so they join. 
The orangutan then does the same to smaller, leafier branches to create a "mattress". After this, the ape stands and braids the tips of branches into the mattress. Doing this increases the stability of the nest. Orangutans make their nests more comfortable by creating "pillows", "blankets", "roofs" and "bunk-beds". Intelligence Orangutans are among the most intelligent non-human primates. Experiments suggest they can track the displacement of objects both visible and hidden. Zoo Atlanta has a touch-screen computer on which their two Sumatran orangutans play games. A 2008 study of two orangutans at the Leipzig Zoo showed orangutans may practise "calculated reciprocity", which involves an individual aiding another with the expectation of being paid back. Orangutans are the first nonhuman species documented to do so. In a 1997 study, two captive adult orangutans were tested with the cooperative pulling paradigm. Without any training, the orangutans succeeded in pulling off an object to get food in the first session. Over the course of 30 sessions, the apes succeeded more quickly, having learned to coordinate. An adult orangutan has been documented to pass the mirror test, indicating self-awareness. Mirror tests with a 2-year-old orangutan failed to reveal self-recognition. Studies in the wild indicate that flanged male orangutans plan their movements in advance and signal them to other individuals. Experiments have also suggested that orangutans can communicate about things that are not present: mother orangutans remain silent in the presence of a perceived threat but when it passes, the mother produces an alarm call to their offspring to teach them about the danger. Orangutans and other great apes show laughter-like vocalisations in response to physical contact such as wrestling, play chasing or tickling. This suggests that laughter derived from a common origin among primate species and therefore evolved before the origin of humans. Orangutans can learn to mimic new sounds by purposely controlling the vibrations of their vocal folds, a trait that led to speech in humans. Bonnie, an orangutan at the US National Zoo, was recorded spontaneously whistling after hearing a caretaker. She appears to whistle without expecting a food reward. Tool use and culture Tool use in orangutans was observed by primatologist Birutė Galdikas in ex-captive populations. Orangutans in Suaq Balimbing were recorded to develop a tool kit for use in foraging which consisted of both insect-extraction sticks for use in the hollows of trees and seed-extraction sticks for harvesting seeds from hard-husked fruit. The orangutans adjusted their tools according to the task at hand, and preference was given to oral tool use. This preference was also found in an experimental study of captive orangutans. Orangutans have been observed to use sticks to poke at catfish, causing them to leap out of the water so the orangutan can grab them. Orangutan have also been documented to keep tools for later. When building a nest, orangutans appear to be able to determine which branches would better support their body weight. Primatologist Carel P. van Schaik and biological anthropologist Cheryl D. Knott further investigated tool use in different wild orangutan populations. They compared geographic variations in tool use related to the processing of Neesia fruit. The orangutans of Suaq Balimbing were found to be avid users of insect and seed-extraction tools when compared to other wild orangutans. 
The scientists suggested these differences are cultural as they do not correlate with habitat. The orangutans at Suaq Balimbing are closely spaced and relatively tolerant of each other; this creates favourable conditions for the spreading of new behaviours. Further evidence that highly social orangutans are more likely to exhibit cultural behaviours came from a study of leaf-carrying behaviours of formerly captive orangutans that were being rehabilitated on the island of Kaja in Borneo. Wild orangutans in Tuanan, Borneo, were reported to use tools in acoustic communication. They use leaves to amplify the kiss squeak sounds they produce. The apes may employ this method of amplification to deceive the listener into believing they are larger animals. In 2003, researchers from six different orangutan field sites who used the same behavioural coding scheme compared the behaviours of the animals from each site. They found each orangutan population used different tools. The evidence suggested the differences were cultural: first, the extent of the differences increased with distance, suggesting cultural diffusion was occurring, and second, the size of the orangutans' cultural repertoire increased according to the amount of social contact present within the group. Social contact facilitates cultural transmission. During a field observation in 2022, a male Sumatran orangutan, known to researchers as Rakus, chewed Fibraurea tinctoria vine leaves and applied the mashed plant material to an open wound on his face. According to primatologists who had been observing Rakus at a nature preserve, "Five days later the facial wound was closed, while within a few weeks it had healed, leaving only a small scar". Personhood In June 2008, Spain would become the first country to recognise the rights of some non-human great apes, based on the guidelines of the Great Ape Project, which are that chimpanzees, bonobos, orangutans, and gorillas not to be used for animal experiments. In December 2014, a court in Argentina ruled that an orangutan named Sandra at the Buenos Aires Zoo must be moved to a sanctuary in Brazil to provide her "partial or controlled freedom". Sandra has since been relocated to The Center for Great Apes in the United States, as it is the only accredited orangutan sanctuary in the Americas. Animal rights groups like Great Ape Project Argentina argued the ruling should apply to all species in captivity, and legal specialists from the Argentina's Federal Chamber of Criminal Cassatio considered the ruling applicable only to non-human hominids. Orangutans and humans Orangutans were known to the native people of Sumatra and Borneo for millennia. The apes are known as maias in Sarawak and mawas in other parts of Borneo and in Sumatra. While some communities hunted them for food and decoration, others placed taboos on such practices. In central Borneo, some traditional folk beliefs consider it bad luck to look an orangutan in the face. Some folk tales involve orangutans mating with and kidnapping humans. There are even stories of hunters being captured by female orangutans. Europeans became aware of the existence of the orangutan in the 17th century. Explorers in Borneo hunted them extensively during the 19th century. In 1779, Dutch anatomist Petrus Camper, who observed the animals and dissected some specimens, gave the first scientific description of the orangutan. Camper mistakenly thought that flanged and unflanged male orangutans were different species, a misconception corrected after his death. 
Little was known about orangutan behaviour until the field studies of Birutė Galdikas, who became a leading authority on the apes. When she arrived in Borneo in 1971, Galdikas settled into a primitive bark-and-thatch hut at a site she dubbed Camp Leakey, in Tanjung Puting. She studied orangutans for the next four years and developed her PhD thesis for UCLA. Galdikas became an outspoken advocate for orangutans and the preservation of their rainforest habitat, which is rapidly being devastated by loggers, palm oil plantations, gold miners, and unnatural forest fires. Along with Jane Goodall and Dian Fossey, Galdikas is considered to be one of Leakey's Angels, named after anthropologist Louis Leakey. In fiction Orangutans first appeared in Western fiction in the 18th century and have been used to comment on human society. Written by the pseudonymous A. Ardra, Tintinnabulum naturae (The Bell of Nature, 1772) is told from the point of view of a human-orangutan hybrid who calls himself the "metaphysician of the woods". Around 50 years later, the anonymously written work The Orang Outang is narrated by a pure orangutan in captivity in the US, writing a letter critiquing Boston society to her friend in Java. Thomas Love Peacock's 1817 novel Melincourt features Sir Oran Haut Ton, an orangutan who lives among English people and becomes a candidate for Member of Parliament. The novel satirises the class and political system of Britain. Oran's purity and status as a "natural man" stands in contrast to the immorality and corruption of the "civilised" humans. In Frank Challice Constable's The Curse of Intellect (1895), the protagonist Reuben Power travels to Borneo and captures an orangutan to train it to speak so he can "know what a beast like that might think of us". Orangutans are featured prominently in the 1963 science fiction novel Planet of the Apes by Pierre Boulle and the media franchise derived from it. They are typically portrayed as bureaucrats like Dr. Zaius, the science minister. Orangutans are sometimes portrayed as antagonists, notably in the 1832 Walter Scott novel Count Robert of Paris and the 1841 Edgar Allan Poe short story The Murders in the Rue Morgue. Disney's 1967 animated musical adaptation of The Jungle Book added a jazzy orangutan named King Louie, who tries to get Mowgli to teach him how to make fire. The 1986 horror film Link features an intelligent orangutan which serves a university professor but has sinister motives; he plots against humanity and stalks a female student assistant. Other stories have portrayed orangutans helping humans, such as The Librarian in Terry Pratchett's fantasy novels Discworld and in Dale Smith's 2004 novel What the Orangutan Told Alice. More comical portrayals of the orangutan include the 1996 film Dunston Checks In. In captivity By the early 19th century, orangutans were being kept in captivity. In 1817, an orangutan joined several other animals in London's Exeter Exchange. He rejected the company of other animals, aside from a dog, and preferred to be with humans. He was occasionally taken on coach rides clothed in a smock-frock and hat and even given drinks at an inn where he behaved politely for the hosts. The London Zoo housed a female orangutan named Jenny who was dressed in human clothing and learned to drink tea. She is remembered for her meeting with Charles Darwin who compared her reactions to those of a human child. 
Zoos and circuses in the Western world would continue to use orangutans and other simians as sources for entertainment, training them to behave like humans at tea parties and to perform tricks. Notable orangutan "character actors" include: Jacob and Rosa of the Tierpark Hagenbeck in Hamburg, Germany, in the early 20th century; Joe Martin of Universal City Zoo in the 1910s and 1920s; and Jiggs of the San Diego Zoo in the 1930s and 1940s. Animal rights groups have urged a stop to such acts, considering them abusive. Starting in the 1960s, zoos became more concerned with education and orangutans' exhibits were designed to mimic their natural environment and let them display their natural behaviours. Ken Allen, an orangutan of the San Diego Zoo, became world famous in the 1980s for multiple escapes from his enclosures. He was nicknamed "the hairy Houdini" and was the subject of a fan club, T-shirts, bumper stickers and a song titled The Ballad of Ken Allen. Galdikas reported that her cook was sexually assaulted by a captive male orangutan. The ape may have suffered from a skewed species identity and forced copulation is a standard mating strategy for low-ranking male orangutans. American animal trafficker Frank Buck claimed to have seen human mothers acting as wet nurses to orphaned orangutan babies in hopes of keeping them alive long enough to sell to a trader, which would be an instance of human–animal breastfeeding. Conservation Status and threats All three species are critically endangered according to the IUCN Red List of mammals. They are legally protected from capture, harm or killing in both Malaysia and Indonesia, and are listed under Appendix I by CITES, which prohibits their unlicensed trade under international law. The Bornean orangutan range has become more fragmented, with few or no apes documented in the southeast. The largest remaining population is found in the forest around the Sabangau River, but this environment is at risk. The Sumatran orangutan is found only in the northern part of Sumatra, most of the population inhabiting the Leuser Ecosystem. The Tapanuli orangutan is found only in the Batang Toru forest of Sumatra. Birutė Galdikas wrote that orangutans were already threatened by poaching and deforestation when she began studying them in 1971. By the 2000s, orangutan habitats decreased rapidly because of logging, mining and fragmentation by roads. A major factor has been the conversion of vast areas of tropical forest to palm oil plantations in response to international demand. Hunting is also a major problem, as is the illegal pet trade. Orangutans may be killed for the bushmeat trade and bones are secretly sold in souvenir shops in several cities in Indonesian Borneo. Conflicts between locals and orangutans also pose a threat. Orangutans that have lost their homes often raid agricultural areas and end up being killed by villagers. Locals may also be motivated to kill orangutans for food or because of their perceived danger. Mother orangutans are killed so their infants can be sold as pets. Between 2012 and 2017, the Indonesian authorities, with the aid of the Orangutan Information Center, seized 114 orangutans, 39 of which were pets. Estimates in the 2000s found that around 6,500 Sumatran orangutans and around 54,000 Bornean orangutans remain in the wild. A 2016 study estimates a population of 14,613 Sumatran orangutans in the wild, twice that of previous population estimates, while 2016 estimates suggest 104,700 Bornean orangutans exist in the wild. 
A 2018 study found that Bornean orangutans declined by 148,500 individuals from 1999 to 2015. Fewer than 800 Tapanuli orangutans are estimated to still exist, which puts the species among the most endangered of the great apes. Conservation centres and organisations Several organisations are working for the rescue, rehabilitation and reintroduction of orangutans. The largest of these is the Borneo Orangutan Survival (BOS) Foundation, founded by conservationist Willie Smits and which operates projects such as the Nyaru Menteng Rehabilitation Program founded by conservationist Lone Drøscher Nielsen. A female orangutan was rescued from a village brothel in Kareng Pangi village, Central Kalimantan, in 2003. The orangutan was shaved and chained for sexual purposes. Since being freed, the orangutan, named Pony, has been living with the BOS. She has been re-socialised to live with other orangutans. In May 2017, the BOS rescued an albino orangutan from captivity in a remote village in Kapuas Hulu, on the island of Kalimantan in Indonesian Borneo. According to volunteers at BOS, albino orangutans are extremely rare (one in ten thousand). This is the first albino orangutan the organisation has seen in 25 years of activity. Other major conservation centres in Indonesia include those at Tanjung Puting National Park, Sebangau National Park, Gunung Palung National Park and Bukit Baka Bukit Raya National Park in Borneo and the Gunung Leuser National Park and Bukit Lawang in Sumatra. In Malaysia, conservation areas include Semenggoh Wildlife Centre and Matang Wildlife Centre also in Sarawak, and the Sepilok Orang Utan Sanctuary in Sabah. Major conservation centres headquartered outside the orangutans' home countries include Frankfurt Zoological Society, Orangutan Foundation International, which was founded by Galdikas, and the Australian Orangutan Project. Conservation organisations such as the Orangutan Land Trust work with the palm oil industry to improve sustainability and encourages the industry to establish conservation areas for orangutans.
Biology and health sciences
Primates
null
22444
https://en.wikipedia.org/wiki/Orbital%20resonance
Orbital resonance
In celestial mechanics, orbital resonance occurs when orbiting bodies exert regular, periodic gravitational influence on each other, usually because their orbital periods are related by a ratio of small integers. Most commonly, this relationship is found between a pair of objects (binary resonance). The physical principle behind orbital resonance is similar in concept to pushing a child on a swing, whereby the orbit and the swing both have a natural frequency, and the body doing the "pushing" will act in periodic repetition to have a cumulative effect on the motion. Orbital resonances greatly enhance the mutual gravitational influence of the bodies (i.e., their ability to alter or constrain each other's orbits). In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be self-correcting and thus stable. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa and Io, and the 2:3 resonance between Neptune and Pluto. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance between bodies with similar orbital radii causes large planetary system bodies to eject most other bodies sharing their orbits; this is part of the much more extensive process of clearing the neighbourhood, an effect that is used in the current definition of a planet. A binary resonance ratio in this article should be interpreted as the ratio of number of orbits completed in the same time interval, rather than as the ratio of orbital periods, which would be the inverse ratio. Thus, the 2:3 ratio above means that Pluto completes two orbits in the time it takes Neptune to complete three. In the case of resonance relationships among three or more bodies, either type of ratio may be used (whereby the smallest whole-integer ratio sequences are not necessarily reversals of each other), and the type of ratio will be specified. History Since the discovery of Newton's law of universal gravitation in the 17th century, the stability of the Solar System has preoccupied many mathematicians, starting with Pierre-Simon Laplace. The stable orbits that arise in a two-body approximation ignore the influence of other bodies. The effect of these added interactions on the stability of the Solar System is very small, but at first it was not known whether they might add up over longer periods to significantly change the orbital parameters and lead to a completely different configuration, or whether some other stabilising effects might maintain the configuration of the orbits of the planets. It was Laplace who found the first answers explaining the linked orbits of the Galilean moons (see below). Before Newton, there was also consideration of ratios and proportions in orbital motions, in what was called "the music of the spheres", or musica universalis. The article on resonant interactions describes resonance in the general modern setting. A primary result from the study of dynamical systems is the discovery and description of a highly simplified model of mode-locking; this is an oscillator that receives periodic kicks via a weak coupling to some driving motor. The analog here would be that a more massive body provides a periodic gravitational kick to a smaller body, as it passes by. The mode-locking regions are named Arnold tongues. 
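The kicked-oscillator picture can be made concrete with the standard circle map, the usual textbook model in which mode-locking and Arnold tongues arise. The sketch below (an illustration, not drawn from the sources above) estimates the winding number, which, for sufficiently strong coupling, can stay pinned at a simple rational such as 2/3 over a finite range of drive frequencies, while with no coupling it simply tracks the drive.

```python
import math

def winding_number(omega, k, n_transient=500, n_iter=2000):
    """Average advance per kick of the standard circle map
    theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n),
    a textbook model of a weakly coupled, periodically kicked oscillator."""
    theta = 0.0
    for _ in range(n_transient):                       # discard the transient
        theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):
        theta += omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter

# With coupling, a range of drive frequencies near 2/3 can give the same locked
# winding number (an Arnold tongue); with k = 0 the winding number equals omega.
for omega in (0.66, 2.0 / 3.0, 0.68):
    print(f"omega={omega:.4f}  coupled={winding_number(omega, k=0.9):.4f}  "
          f"uncoupled={winding_number(omega, k=0.0):.4f}")
```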
Types of resonance In general, an orbital resonance may involve one or any combination of the orbit parameters (e.g. eccentricity versus semimajor axis, or eccentricity versus inclination); it may act on any time scale from short term, commensurable with the orbit periods, to secular, measured in 10^4 to 10^6 years; and it may lead either to long-term stabilization of the orbits or to their destabilization. A mean-motion orbital resonance occurs when two bodies have periods of revolution that are a simple integer ratio of each other. The resonance does not depend only on the existence of such a ratio, and in fact the ratio of periods is generally not exactly a rational number, even averaged over a long period. For example, in the case of Pluto and Neptune (see below), the true equation says that the average rate of change of 3λ_P − 2λ_N − ϖ_P is exactly zero, where λ_P is the longitude of Pluto, λ_N is the longitude of Neptune, and ϖ_P is the longitude of Pluto's perihelion. Since the longitude of Pluto's perihelion changes by only a small fraction of a degree per year, the ratio of periods works out to 1.503 in the long term rather than exactly 1.5. Depending on the details, mean-motion orbital resonance can either stabilize or destabilize the orbit. Stabilization may occur when the two bodies move in such a synchronised fashion that they never closely approach. For instance: The orbits of Pluto and the plutinos are stable, despite crossing that of the much larger Neptune, because they are in a 2:3 resonance with it. The resonance ensures that, when they approach perihelion and Neptune's orbit, Neptune is consistently distant (averaging a quarter of its orbit away). Other (much more numerous) Neptune-crossing bodies that were not in resonance were ejected from that region by strong perturbations due to Neptune. There are also smaller but significant groups of resonant trans-Neptunian objects occupying the 1:1 (Neptune trojans), 3:5, 4:7, 1:2 (twotinos) and 2:5 resonances, among others, with respect to Neptune. In the asteroid belt beyond 3.5 AU from the Sun, the 3:2, 4:3 and 1:1 resonances with Jupiter are populated by clumps of asteroids (the Hilda family, the few Thule asteroids, and the numerous Trojan asteroids, respectively). Orbital resonances can also destabilize one of the orbits. This process can be exploited to find energy-efficient ways of deorbiting spacecraft. For small bodies, destabilization is actually far more likely. For instance: In the asteroid belt within 3.5 AU from the Sun, the major mean-motion resonances with Jupiter are locations of gaps in the asteroid distribution, the Kirkwood gaps (most notably at the 4:1, 3:1, 5:2, 7:3 and 2:1 resonances). Asteroids have been ejected from these almost empty lanes by repeated perturbations. However, there are still populations of asteroids temporarily present in or near these resonances. For example, asteroids of the Alinda family are in or close to the 3:1 resonance, with their orbital eccentricity steadily increased by interactions with Jupiter until they eventually have a close encounter with an inner planet that ejects them from the resonance. In the rings of Saturn, the Cassini Division is a gap between the inner B Ring and the outer A Ring that has been cleared by a 2:1 resonance with the moon Mimas. (More specifically, the site of the resonance is the Huygens Gap, which bounds the outer edge of the B Ring.) In the rings of Saturn, the Encke and Keeler gaps within the A Ring are cleared by 1:1 resonances with the embedded moonlets Pan and Daphnis, respectively. 
The A Ring's outer edge is maintained by a destabilizing 7:6 resonance with the moon Janus. Most bodies that are in resonance orbit in the same direction; however, the retrograde asteroid 514107 Kaʻepaokaʻawela appears to be in a stable (for a period of at least a million years) 1:−1 resonance with Jupiter. In addition, a few retrograde damocloids have been found that are temporarily captured in mean-motion resonance with Jupiter or Saturn. Such orbital interactions are weaker than the corresponding interactions between bodies orbiting in the same direction. The trans-Neptunian object has an orbital inclination of 110° with respect to the planets' orbital plane and is currently in a 7:9 polar resonance with Neptune. A Laplace resonance is a three-body resonance with a 1:2:4 orbital period ratio (equivalent to a 4:2:1 ratio of orbits). The term arose because Pierre-Simon Laplace discovered that such a resonance governed the motions of Jupiter's moons Io, Europa, and Ganymede. It is now also often applied to other 3-body resonances with the same ratios, such as that between the extrasolar planets Gliese 876 c, b, and e. Three-body resonances involving other simple integer ratios have been termed "Laplace-like" or "Laplace-type". A Lindblad resonance drives spiral density waves both in galaxies (where stars are subject to forcing by the spiral arms themselves) and in Saturn's rings (where ring particles are subject to forcing by Saturn's moons). A secular resonance occurs when the precession of two orbits is synchronised (usually a precession of the perihelion or ascending node). A small body in secular resonance with a much larger one (e.g. a planet) will precess at the same rate as the large body. Over long times (a million years, or so) a secular resonance will change the eccentricity and inclination of the small body. Several prominent examples of secular resonance involve Saturn. There is a near-resonance between the precession of Saturn's rotational axis and that of Neptune's orbital axis (both of which have periods of about 1.87 million years), which has been identified as the likely source of Saturn's large axial tilt (26.7°). Initially, Saturn probably had a tilt closer to that of Jupiter (3.1°). The gradual depletion of the Kuiper belt would have decreased the precession rate of Neptune's orbit; eventually, the frequencies matched, and Saturn's axial precession was captured into a spin-orbit resonance, leading to an increase in Saturn's obliquity. (The angular momentum of Neptune's orbit is 104 times that of Saturn's rotation rate, and thus dominates the interaction.) However, it seems that the resonance no longer exists. Detailed analysis of data from the Cassini spacecraft gives a value of the moment of inertia of Saturn that is just outside the range for the resonance to exist, meaning that the spin axis does not stay in phase with Neptune's orbital inclination in the long term, as it apparently did in the past. One theory for why the resonance came to an end is that there was another moon around Saturn whose orbit destabilized about 100 million years ago, perturbing Saturn. The perihelion secular resonance between asteroids and Saturn (ν6 = g − g6) helps shape the asteroid belt (the subscript "6" identifies Saturn as the sixth planet from the Sun). Asteroids which approach it have their eccentricity slowly increased until they become Mars-crossers, at which point they are usually ejected from the asteroid belt by a close pass to Mars. 
This resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU, and at inclinations of about 20°. Numerical simulations have suggested that the eventual formation of a perihelion secular resonance between Mercury and Jupiter (g1 = g5) has the potential to greatly increase Mercury's eccentricity and possibly destabilize the inner Solar System several billion years from now. The Titan Ringlet within Saturn's C Ring represents another type of resonance in which the rate of apsidal precession of one orbit exactly matches the speed of revolution of another. The outer end of this eccentric ringlet always points towards Saturn's major moon Titan. A Kozai resonance occurs when the inclination and eccentricity of a perturbed orbit oscillate synchronously (increasing eccentricity while decreasing inclination and vice versa). This resonance applies only to bodies on highly inclined orbits; as a consequence, such orbits tend to be unstable, since the growing eccentricity would result in small pericenters, typically leading to a collision or (for large moons) destruction by tidal forces. In an example of another type of resonance involving orbital eccentricity, the eccentricities of Ganymede and Callisto vary with a common period of 181 years, although with opposite phases. Mean-motion resonances in the Solar System There are only a few known mean-motion resonances (MMR) in the Solar System involving planets, dwarf planets or larger satellites (a much greater number involve asteroids, planetary rings, moonlets and smaller Kuiper belt objects, including many possible dwarf planets): 2:3 Pluto–Neptune (as well as the other plutinos); 2:4 Tethys–Mimas (Saturn's moons; not simplified, because the libration of the nodes must be taken into account); 1:2 Dione–Enceladus (Saturn's moons); 3:4 Hyperion–Titan (Saturn's moons); and 1:2:4 Ganymede–Europa–Io (Jupiter's moons, ratio of orbits). Additionally, Haumea is thought to be in a 7:12 resonance with Neptune, and the dwarf planet Gonggong is thought to be in a 3:10 resonance with Neptune. The simple integer ratios between periods hide more complex relations: the point of conjunction can oscillate (librate) around an equilibrium point defined by the resonance, and, given non-zero eccentricities, the nodes or periapsides can drift (a resonance-related, short-period effect, not a secular precession). As illustration of the latter, consider the well-known 2:1 resonance of Io–Europa. If the orbiting periods were in this exact relation, the mean motions n (inverse of periods, often expressed in degrees per day) would satisfy n_Io − 2·n_Eu = 0. Substituting the observed mean motions, one gets a residual of 0.7395° per day, a value substantially different from zero. Actually, the resonance is perfect, but it also involves the precession of the perijove (the point closest to Jupiter), ϖ̇. The correct equation (part of the Laplace equations) is n_Io − 2·n_Eu + ϖ̇_Io = 0, where ϖ̇_Io ≈ −0.7395° per day is the drift rate of Io's perijove (the perijove regresses). In other words, the mean motion of Io is indeed double that of Europa once the precession of the perijove is taken into account. An observer sitting on the (drifting) perijove will see the moons coming into conjunction in the same place (elongation). The other pairs listed above satisfy the same type of equation, with the exception of the Mimas–Tethys resonance. In this case, the resonance satisfies the equation 4·n_Te − 2·n_Mi − Ω̇_Te − Ω̇_Mi = 0, where Ω̇ denotes the rate of nodal regression of each moon. The point of conjunction librates around the midpoint between the nodes of the two moons. 
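These relations can be checked numerically from the moons' sidereal periods (Io 1.769138 d, Europa 3.551181 d, Ganymede 7.154553 d, standard values not quoted above). The sketch below uses the perijove drift rate given in the text and also evaluates the three-body combination that reappears in the Laplace relation of the next section; it is illustrative rather than a precise ephemeris calculation.

```python
# Sidereal periods in days (standard values, assumed here for illustration).
periods = {"Io": 1.769138, "Europa": 3.551181, "Ganymede": 7.154553}
n = {m: 360.0 / p for m, p in periods.items()}      # mean motions, deg/day

naive = n["Io"] - 2.0 * n["Europa"]                 # residual of an "exact" 2:1 relation
pomega_dot_io = -0.7395                             # perijove drift quoted in the text, deg/day
corrected = naive + pomega_dot_io                   # Laplace-equation form, should be ~0

# Three-body combination used in the Laplace relation of the next section.
laplace = n["Io"] - 3.0 * n["Europa"] + 2.0 * n["Ganymede"]

print(f"n_Io - 2 n_Eu             = {naive:+.4f} deg/day")
print(f"... + perijove precession = {corrected:+.4f} deg/day (≈ 0)")
print(f"n_Io - 3 n_Eu + 2 n_Ga    = {laplace:+.5f} deg/day (≈ 0 with no correction)")
```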
Laplace resonance The Laplace resonance involving Io–Europa–Ganymede includes the following relation locking the orbital phase of the moons: Φ_L = λ_Io − 3·λ_Eu + 2·λ_Ga = 180°, where λ are the mean longitudes of the moons (the second equals sign ignores libration). This relation makes a triple conjunction impossible. (A Laplace resonance in the Gliese 876 system, in contrast, is associated with one triple conjunction per orbit of the outermost planet, ignoring libration.) Φ_L librates about 180° with an amplitude of 0.03°. Another "Laplace-like" resonance involves the moons Styx, Nix, and Hydra of Pluto: Φ = 3·λ_Styx − 5·λ_Nix + 2·λ_Hydra = 180°. This reflects orbital periods for Styx, Nix, and Hydra, respectively, that are close to a ratio of 18:22:33 (or, in terms of the near resonances with Charon's period, 3+3/11:4:6; see below); the respective ratio of orbits is 11:9:6. Based on the ratios of synodic periods, there are 5 conjunctions of Styx and Hydra and 3 conjunctions of Nix and Hydra for every 2 conjunctions of Styx and Nix. As with the Galilean satellite resonance, triple conjunctions are forbidden. Φ librates about 180° with an amplitude of at least 10°. Plutino resonances The dwarf planet Pluto is following an orbit trapped in a web of resonances with Neptune. The resonances include: a mean-motion resonance of 2:3; a resonance of the argument of perihelion (libration around 90°), which keeps the perihelion above the ecliptic; and a resonance of the longitude of the perihelion in relation to that of Neptune. One consequence of these resonances is that a separation of at least 30 AU is maintained when Pluto crosses Neptune's orbit. The minimum separation between the two bodies overall is 17 AU, while the minimum separation between Pluto and Uranus is just 11 AU (see Pluto's orbit for detailed explanation and graphs). The next largest body in a similar 2:3 resonance with Neptune, called a plutino, is the probable dwarf planet Orcus. Orcus has an orbit similar in inclination and eccentricity to Pluto's. However, the two are constrained by their mutual resonance with Neptune to always be in opposite phases of their orbits; Orcus is thus sometimes described as the "anti-Pluto". Naiad:Thalassa 73:69 resonance Neptune's innermost moon, Naiad, is in a 73:69 fourth-order resonance with the next outward moon, Thalassa. As it orbits Neptune, the more inclined Naiad successively passes Thalassa twice from above and then twice from below, in a cycle that repeats every ~21.5 Earth days. The two moons are about 3540 km apart when they pass each other. Although their orbital radii differ by only 1850 km, Naiad swings ~2800 km above or below Thalassa's orbital plane at closest approach. As is common, this resonance stabilizes the orbits by maximizing separation at conjunction, but it is unusual for the role played by orbital inclination in facilitating this avoidance in a case where eccentricities are minimal. Mean-motion resonances among extrasolar planets While most extrasolar planetary systems discovered have not been found to have planets in mean-motion resonances, chains of up to five resonant planets and up to seven at least near-resonant planets have been uncovered. Simulations have shown that during planetary system formation, the appearance of resonant chains of planetary embryos is favored by the presence of the primordial gas disc. Once that gas dissipates, 90–95% of those chains must then become unstable to match the low frequency of resonant chains observed. 
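The 5:3:2 pattern of conjunctions among Styx, Nix and Hydra follows from their synodic frequencies, 1/P_inner − 1/P_outer. A small sketch, using approximate orbital periods (about 20.16, 24.85 and 38.20 days, values assumed here for illustration rather than quoted above):

```python
# Approximate orbital periods of Pluto's small moons, in days (assumed values).
P = {"Styx": 20.16, "Nix": 24.85, "Hydra": 38.20}

def synodic_frequency(p_inner, p_outer):
    """Conjunctions per day for an inner/outer pair of prograde, coplanar orbits."""
    return 1.0 / p_inner - 1.0 / p_outer

f_sh = synodic_frequency(P["Styx"], P["Hydra"])
f_nh = synodic_frequency(P["Nix"], P["Hydra"])
f_sn = synodic_frequency(P["Styx"], P["Nix"])

base = f_sn / 2.0                                   # normalise so Styx-Nix = 2
print(f"Styx-Hydra : Nix-Hydra : Styx-Nix ≈ "
      f"{f_sh / base:.2f} : {f_nh / base:.2f} : {f_sn / base:.2f}")   # ~5 : 3 : 2
```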
As mentioned above, Gliese 876 e, b and c are in a Laplace resonance, with a 4:2:1 ratio of periods (124.3, 61.1 and 30.0 days). In this case, the Laplace argument librates with an amplitude of 40° ± 13° and the resonance follows the time-averaged relation Φ_L = λ_c − 3·λ_b + 2·λ_e = 0°. Kepler-223 has four planets in a resonance with an 8:6:4:3 orbit ratio, and a 3:4:6:8 ratio of periods (7.3845, 9.8456, 14.7887 and 19.7257 days). This represents the first confirmed 4-body orbital resonance. The librations within this system are such that close encounters between two planets occur only when the other planets are in distant parts of their orbits. Simulations indicate that this system of resonances must have formed via planetary migration. Kepler-80 d, e, b, c and g have periods in a ~ 1.000: 1.512: 2.296: 3.100: 4.767 ratio (3.0722, 4.6449, 7.0525, 9.5236 and 14.6456 days). However, in a frame of reference that rotates with the conjunctions, this reduces to a period ratio of 4:6:9:12:18 (an orbit ratio of 9:6:4:3:2). Conjunctions of d and e, e and b, b and c, and c and g occur at relative intervals of 2:3:6:6 (9.07, 13.61 and 27.21 days) in a pattern that repeats about every 190.5 days (seven full cycles in the rotating frame) in the inertial or nonrotating frame (equivalent to a 62:41:27:20:13 orbit ratio resonance in the nonrotating frame, because the conjunctions circulate in the direction opposite orbital motion). Librations of possible three-body resonances have amplitudes of only about 3 degrees, and modeling indicates the resonant system is stable to perturbations. Triple conjunctions do not occur. TOI-178 has 6 confirmed planets, of which the outer 5 planets form a similar resonant chain in a rotating frame of reference, which can be expressed as 2:4:6:9:12 in period ratios, or as 18:9:6:4:3 in orbit ratios. In addition, the innermost planet b, with a period of 1.91 days, orbits close to where it would also be part of the same Laplace resonance chain, as a 3:5 resonance with planet c would be fulfilled at a period of ~1.95 days; this implies that it might have evolved there but later been pulled out of resonance, possibly by tidal forces. TRAPPIST-1's seven approximately Earth-sized planets are in a chain of near resonances (the longest such chain known), having an orbit ratio of approximately 24, 15, 9, 6, 4, 3 and 2, or nearest-neighbor period ratios (proceeding outward) of about 8/5, 5/3, 3/2, 3/2, 4/3 and 3/2 (1.603, 1.672, 1.506, 1.509, 1.342 and 1.519). They are also configured such that each triple of adjacent planets is in a Laplace resonance (i.e., b, c and d in one such Laplace configuration; c, d and e in another, etc.). The resonant configuration is expected to be stable on a time scale of billions of years, assuming it arose during planetary migration. A musical interpretation of the resonance has been provided. Kepler-29 has a pair of planets in a 7:9 resonance (ratio of 1/1.28587). Kepler-36 has a pair of planets close to a 6:7 resonance. Kepler-37 d, c and b are within one percent of a resonance with an 8:15:24 orbit ratio and a 15:8:5 ratio of periods (39.792187, 21.301886 and 13.367308 days). Of Kepler-90's eight known planets, the period ratios b:c, c:i and i:d are close to 4:5, 3:5 and 1:4, respectively (4:4.977, 3:4.97 and 1:4.13), and d, e, f, g and h are close to a 2:3:4:7:11 period ratio (2: 3.078: 4.182: 7.051: 11.102; also 7: 11.021). f, g and h are also close to a 3:5:8 period ratio (3: 5.058: 7.964).
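A quick way to see how close a quoted set of periods is to a small-integer chain is to reduce the neighbouring period ratios to nearby simple fractions. The Python sketch below does this for the Kepler-223 periods quoted above; it is a rough diagnostic only, since a true resonance test would examine the resonant arguments, as discussed earlier.

```python
# Sketch: reduce neighbouring period ratios to nearby small-integer fractions,
# using the Kepler-223 periods quoted in the text (days). A rough diagnostic
# only; a true resonance test would examine the resonant arguments.
from fractions import Fraction

periods = [7.3845, 9.8456, 14.7887, 19.7257]   # Kepler-223, from the text

for inner, outer in zip(periods, periods[1:]):
    ratio = outer / inner
    approx = Fraction(ratio).limit_denominator(8)       # nearest simple fraction
    print(f"{outer:.4f} / {inner:.4f} = {ratio:.4f} ~ {approx} "
          f"(off by {abs(ratio - float(approx)):.4f})")
```

For these periods the ratios come out as approximately 4/3, 3/2 and 4/3, consistent with the quoted 3:4:6:8 chain.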
Relevant to systems like this and that of Kepler-36, calculations suggest that the presence of an outer gas giant planet facilitates the formation of closely packed resonances among inner super-Earths. HD 41248 has a pair of super-Earths within 0.3% of a 5:7 resonance (ratio of 1/1.39718). K2-138 has 5 confirmed planets in an unbroken near-3:2 resonance chain (with periods of 2.353, 3.560, 5.405, 8.261 and 12.758 days). The system was discovered in the citizen science project Exoplanet Explorers, using K2 data. K2-138 could host co-orbital bodies (in a 1:1 mean-motion resonance). Resonant chain systems can stabilize co-orbital bodies and a dedicated analysis of the K2 light curve and radial-velocity from HARPS might reveal them. Follow-up observations with the Spitzer Space Telescope suggest a sixth planet continuing the 3:2 resonance chain, while leaving two gaps in the chain (its period is 41.97 days). These gaps could be filled by smaller non-transiting planets. Future observations with CHEOPS will measure transit-timing variations of the system to further analyse the mass of the planets and could potentially find other planetary bodies in the system. K2-32 has four planets in a near 1:2:5:7 resonance (with periods of 4.34, 8.99, 20.66 and 31.71 days). Planet e has a radius almost identical to that of the Earth. The other planets have a size between Neptune and Saturn. V1298 Tauri has four confirmed planets of which planets c, d and b are near a 1:2:3 resonance (with periods of 8.25, 12.40 and 24.14 days). Planet e only shows a single transit in the K2 light curve and has a period larger than 36 days. Planet e might be in a low-order resonance (of 2:3, 3:5, 1:2, or 1:3) with planet b. The system is very young (23±4 Myr) and might be a precursor of a compact multiplanet system. The 2:3 resonance suggests that some close-in planets may either form in resonances or evolve into them on timescales of less than 10 Myr. The planets in the system have a size between Neptune and Saturn. Only planet b has a size similar to Jupiter. HD 158259 contains four planets in a 3:2 near resonance chain (with periods of 3.432, 5.198, 7.954 and 12.03 days, or period ratios of 1.51, 1.53 and 1.51, respectively), with a possible fifth planet also near a 3:2 resonance (with a period of 17.4 days). The exoplanets were found with the SOPHIE échelle spectrograph, using the radial velocity method. Kepler-1649 contains two Earth-size planets close to a 9:4 resonance (with periods of 19.53527 and 8.689099 days, or a period ratio of 2.24825), including one ("c") in the habitable zone. An undetected planet with a 13.0-day period would create a 3:2 resonance chain. Kepler-88 has a pair of inner planets close to a 1:2 resonance (period ratio of 2.0396), with a mass ratio of ~22.5, producing very large transit timing variations of ~0.5 days for the innermost planet. There is a yet more massive outer planet in a ~1400 day orbit. HD 110067 has six known planets, in a 54:36:24:16:12:9 resonance ratio. Cases of extrasolar planets close to a 1:2 mean-motion resonance are fairly common. Sixteen percent of systems found by the transit method are reported to have an example of this (with period ratios in the range 1.83–2.18), as well as one sixth of planetary systems characterized by Doppler spectroscopy (with in this case a narrower period ratio range). Due to incomplete knowledge of the systems, the actual proportions are likely to be higher. 
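The "near 1:2" statistic just quoted translates into a simple screening criterion on period ratios. Here is a minimal sketch that applies the quoted 1.83–2.18 window to a few period ratios mentioned in this section (HD 158259, Kepler-88 and Kepler-1649).

```python
# Sketch: flag planet pairs whose period ratio falls in the 1.83-2.18 window
# quoted above for "near 1:2" commensurabilities. The sample ratios are ones
# quoted elsewhere in this section (HD 158259, Kepler-88, Kepler-1649).

def near_two_to_one(period_ratio, low=1.83, high=2.18):
    """True if the outer/inner period ratio lies in the quoted window."""
    return low <= period_ratio <= high

sample_ratios = [1.51, 1.53, 1.51, 2.0396, 2.24825]
flagged = [r for r in sample_ratios if near_two_to_one(r)]
print("ratios in the near-1:2 window:", flagged)   # only the Kepler-88 pair
```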
Overall, about a third of radial velocity characterized systems appear to have a pair of planets close to a commensurability. It is much more common for pairs of planets to have orbital period ratios a few percent larger than a mean-motion resonance ratio than a few percent smaller (particularly in the case of first-order resonances, in which the integers in the ratio differ by one). This was predicted to be true in cases where tidal interactions with the star are significant. Coincidental 'near' ratios of mean motion A number of near-integer-ratio relationships between the orbital frequencies of the planets or major moons are sometimes pointed out. However, these have no dynamical significance because there is no appropriate precession of perihelion or other libration to make the resonance perfect (see the detailed discussion in the section above). Such near resonances are dynamically insignificant even if the mismatch is quite small because (unlike a true resonance), after each cycle the relative position of the bodies shifts. When averaged over astronomically short timescales, their relative position is random, just like bodies that are nowhere near resonance. For example, consider the orbits of Earth and Venus, which arrive at almost the same configuration after 8 Earth orbits and 13 Venus orbits. The actual ratio is 0.61518624, which is only 0.032% away from exactly 8:13. The mismatch after 8 years is only 1.5° of Venus' orbital movement. Still, this is enough that Venus and Earth find themselves in the opposite relative orientation to the original every 120 such cycles, which is 960 years. Therefore, on timescales of thousands of years or more (still tiny by astronomical standards), their relative position is effectively random. The presence of a near resonance may reflect that a perfect resonance existed in the past, or that the system is evolving towards one in the future. The least probable of these orbital correlations – that is, the relationship that seems least likely to be merely coincidental – is that between Io and Metis, followed by those between Rosalind and Cordelia, Pallas and Ceres, Jupiter and Pallas, Callisto and Ganymede, and Hydra and Charon, respectively. Possible past mean-motion resonances A past resonance between Jupiter and Saturn may have played a dramatic role in early Solar System history. A 2004 computer model by Alessandro Morbidelli of the Observatoire de la Côte d'Azur in Nice suggested the formation of a 1:2 resonance between Jupiter and Saturn due to interactions with planetesimals that caused them to migrate inward and outward, respectively. In the model, this created a gravitational push that propelled both Uranus and Neptune into higher orbits, and in some scenarios caused them to switch places, which would have doubled Neptune's distance from the Sun. The resultant expulsion of objects from the proto-Kuiper belt as Neptune moved outwards could explain the Late Heavy Bombardment 600 million years after the Solar System's formation and the origin of Jupiter's Trojan asteroids. An outward migration of Neptune could also explain the current occupancy of some of its resonances (particularly the 2:5 resonance) within the Kuiper belt. While Saturn's mid-sized moons Dione and Tethys are not close to an exact resonance now, they may have been in a 2:3 resonance early in the Solar System's history.
This would have led to orbital eccentricity and tidal heating that may have warmed Tethys' interior enough to form a subsurface ocean. Subsequent freezing of the ocean after the moons escaped from the resonance may have generated the extensional stresses that created the enormous graben system of Ithaca Chasma on Tethys. The satellite system of Uranus is notably different from those of Jupiter and Saturn in that it lacks precise resonances among the larger moons, while the majority of the larger moons of Jupiter (3 of the 4 largest) and of Saturn (6 of the 8 largest) are in mean-motion resonances. In all three satellite systems, moons were likely captured into mean-motion resonances in the past as their orbits shifted due to tidal dissipation, a process by which satellites gain orbital energy at the expense of the primary's rotational energy, affecting inner moons disproportionately. In the Uranian system, however, due to the planet's lesser degree of oblateness, and the larger relative size of its satellites, escape from a mean-motion resonance is much easier. Lower oblateness of the primary alters its gravitational field in such a way that different possible resonances are spaced more closely together. A larger relative satellite size increases the strength of their interactions. Both factors lead to more chaotic orbital behavior at or near mean-motion resonances. Escape from a resonance may be associated with capture into a secondary resonance, and/or tidal evolution-driven increases in orbital eccentricity or inclination. Mean-motion resonances that probably once existed in the Uranus System include (3:5) Ariel-Miranda, (1:3) Umbriel-Miranda, (3:5) Umbriel-Ariel, and (1:4) Titania-Ariel. Evidence for such past resonances includes the relatively high eccentricities of the orbits of Uranus' inner satellites, and the anomalously high orbital inclination of Miranda. High past orbital eccentricities associated with the (1:3) Umbriel-Miranda and (1:4) Titania-Ariel resonances may have led to tidal heating of the interiors of Miranda and Ariel, respectively. Miranda probably escaped from its resonance with Umbriel via a secondary resonance, and the mechanism of this escape is believed to explain why its orbital inclination is more than 10 times those of the other regular Uranian moons (see Uranus' natural satellites). Similar to the case of Miranda, the present inclinations of Jupiter's moonlets Amalthea and Thebe are thought to be indications of past passage through the 3:1 and 4:2 resonances with Io, respectively. Neptune's regular moons Proteus and Larissa are thought to have passed through a 1:2 resonance a few hundred million years ago; the moons have drifted away from each other since then because Proteus is outside a synchronous orbit and Larissa is within one. Passage through the resonance is thought to have excited both moons' eccentricities to a degree that has not since been entirely damped out. In the case of Pluto's satellites, it has been proposed that the present near resonances are relics of a previous precise resonance that was disrupted by tidal damping of the eccentricity of Charon's orbit (see Pluto's natural satellites for details). The near resonances may be maintained by a 15% local fluctuation in the Pluto-Charon gravitational field. Thus, these near resonances may not be coincidental. The smaller inner moon of the dwarf planet Haumea, Namaka, is one tenth the mass of the larger outer moon, Hiʻiaka. 
Namaka revolves around Haumea in 18 days in an eccentric, non-Keplerian orbit, and as of 2008 is inclined 13° from Hiʻiaka. Over the timescale of the system, it should have been tidally damped into a more circular orbit. It appears that it has been disturbed by resonances with the more massive Hiʻiaka, due to converging orbits as it moved outward from Haumea because of tidal dissipation. The moons may have been caught in and then escaped from orbital resonance several times. They probably passed through the 3:1 resonance relatively recently, and currently are in or at least close to an 8:3 resonance. Namaka's orbit is strongly perturbed, with a current precession of about −6.5° per year.
Physical sciences
Orbital mechanics
Astronomy
22458
https://en.wikipedia.org/wiki/Octahedron
Octahedron
In geometry, an octahedron (plural: octahedra or octahedrons) is a polyhedron with eight faces. One special case is the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex. Regular octahedra occur in nature as crystal structures. Many types of irregular octahedra also exist, including both convex and non-convex shapes. A regular octahedron is the three-dimensional case of the more general concept of a cross-polytope. Regular octahedron A regular octahedron is an octahedron that is a regular polyhedron. All the faces of a regular octahedron are equilateral triangles of the same size, and exactly four triangles meet at each vertex. A regular octahedron is convex, meaning that for any two points within it, the line segment connecting them lies entirely within it. It is one of the eight convex deltahedra because all of its faces are equilateral triangles. It is a composite polyhedron made by attaching two equilateral square pyramids. Its dual polyhedron is the cube, and they have the same three-dimensional symmetry group, the octahedral symmetry Oh. As a Platonic solid The regular octahedron is one of the Platonic solids, a set of polyhedrons whose faces are congruent regular polygons and in which the same number of faces meet at each vertex. This ancient set of polyhedrons was named after Plato who, in his Timaeus dialogue, related these solids to nature. One of them, the regular octahedron, represented the classical element of wind. Following its attribution to nature by Plato, Johannes Kepler in his Harmonices Mundi sketched each of the Platonic solids. In his Mysterium Cosmographicum, Kepler also proposed a model of the Solar System in which the Platonic solids were nested one inside another and separated by six spheres corresponding to the six planets then known. Ordered from the innermost to the outermost, the solids were: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube. As a square bipyramid Many octahedra of interest are square bipyramids. A square bipyramid is a bipyramid constructed by attaching two square pyramids base-to-base. These pyramids cover their square bases, so the resulting polyhedron has eight triangular faces. A square bipyramid is said to be right if the constituent square pyramids are symmetric and both of their apices lie on the line passing through the center of the base; otherwise, it is oblique. The resulting right bipyramid has as its three-dimensional point group the dihedral group D4h of order sixteen: its appearance is unchanged by rotation around the vertical axis of symmetry passing through the apices and the base's center, it has mirror symmetry relative to any bisector of the base, and it is also symmetric under reflection across a horizontal plane. Therefore, this square bipyramid is face-transitive or isohedral. If the edges of a square bipyramid are all equal in length, then that square bipyramid is a regular octahedron. Metric properties and Cartesian coordinates The surface area of a regular octahedron can be ascertained by summing the areas of its eight equilateral triangles, whereas its volume is twice the volume of a square pyramid; if the edge length is a, they are A = 2·√3·a² ≈ 3.464·a² and V = (√2/3)·a³ ≈ 0.471·a³. The radius of a circumscribed sphere (one that touches the octahedron at all vertices), the radius of an inscribed sphere (one tangent to each of the octahedron's faces), and the radius of a midsphere (one that touches the middle of each edge) are, respectively, r_u = (√2/2)·a ≈ 0.707·a, r_i = (√6/6)·a ≈ 0.408·a, and r_m = a/2. The dihedral angle of a regular octahedron between two adjacent triangular faces is arccos(−1/3) ≈ 109.47° (see the sketch below).
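The formulas above are easy to check numerically. The following short Python sketch simply restates the expressions given in the text and evaluates them for a unit edge length.

```python
# Sketch: evaluate the metric formulas quoted above for a regular octahedron
# with edge length a (here a = 1).
import math

def octahedron_metrics(a=1.0):
    return {
        "surface area": 2.0 * math.sqrt(3.0) * a ** 2,        # eight equilateral triangles
        "volume": (math.sqrt(2.0) / 3.0) * a ** 3,            # two square pyramids
        "circumradius": (math.sqrt(2.0) / 2.0) * a,           # touches all vertices
        "inradius": (math.sqrt(6.0) / 6.0) * a,               # tangent to every face
        "midradius": a / 2.0,                                 # touches edge midpoints
        "dihedral angle (deg)": math.degrees(math.acos(-1.0 / 3.0)),  # ~109.47
    }

for name, value in octahedron_metrics().items():
    print(f"{name}: {value:.4f}")
```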
This dihedral angle can be obtained from the dihedral angles of an equilateral square pyramid: the octahedron's dihedral angle along an edge shared by two triangles of the same pyramid equals the pyramid's dihedral angle between two adjacent triangular faces, while its dihedral angle along an edge where the two pyramids are attached is twice the pyramid's dihedral angle between a triangular face and the square base. An octahedron with edge length √2 can be placed with its center at the origin and its vertices on the coordinate axes; the Cartesian coordinates of the vertices are then (±1, 0, 0), (0, ±1, 0), and (0, 0, ±1). In three-dimensional space, the octahedron with center coordinates (a, b, c) and radius r is the set of all points (x, y, z) such that |x − a| + |y − b| + |z − c| ≤ r. Graph The skeleton of a regular octahedron can be represented as a graph according to Steinitz's theorem, provided the graph is planar—its edges can be drawn connecting the vertices without crossing one another—and 3-connected—it remains connected whenever fewer than three of its vertices are removed. Its graph is called the octahedral graph, one of the Platonic graphs. The octahedral graph can be considered as the complete tripartite graph K2,2,2, a graph partitioned into three independent sets, each consisting of two opposite vertices. More generally, it is the Turán graph T(6,3). The octahedral graph is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. The other three polyhedra with this property are the pentagonal dipyramid, the snub disphenoid, and an irregular polyhedron with 12 vertices and 20 triangular faces. Related figures The interior of the compound of two dual tetrahedra is an octahedron, and this compound—called the stella octangula—is its first and only stellation. Correspondingly, a regular octahedron is the result of cutting off, from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e., rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids. One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of a regular icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. Five octahedra define any given icosahedron in this fashion, and together they define a regular compound. A regular icosahedron produced this way is called a snub octahedron. The regular octahedron can also be considered as an antiprism, a prism-like polyhedron in which the lateral faces are replaced by alternating equilateral triangles; specifically, it is the trigonal antiprism. It therefore has the property of being quasiregular, a polyhedron in which two different polygonal faces alternate around and meet at each vertex. Octahedra and tetrahedra can be alternated to form a vertex-, edge-, and face-uniform tessellation of space. This and the regular tessellation of cubes are the only such uniform honeycombs in 3-dimensional space. The uniform tetrahemihexahedron is a tetrahedral-symmetry faceting of the regular octahedron, sharing its edge and vertex arrangement.
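As a brief aside, here is a small Python sketch of the coordinate description given above: it builds the six vertices, confirms the twelve edges of length √2 (every vertex pair except the three antipodal pairs), and uses the |x| + |y| + |z| ≤ r condition as a membership test for the solid.

```python
# Sketch: the vertex coordinates (+-1,0,0), (0,+-1,0), (0,0,+-1) described
# above, the twelve edges of length sqrt(2) (every vertex pair except the
# three antipodal pairs), and the |x|+|y|+|z| <= r membership test.
from itertools import combinations
import math

verts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

edges = [(p, q) for p, q in combinations(verts, 2)
         if not all(a == -b for a, b in zip(p, q))]   # exclude antipodal pairs

assert len(edges) == 12
assert all(math.isclose(dist(p, q), math.sqrt(2)) for p, q in edges)
print("12 edges, each of length", math.sqrt(2))

def inside(x, y, z, r=1.0):
    # solid octahedron of "radius" r centered at the origin
    return abs(x) + abs(y) + abs(z) <= r

print(inside(0.3, 0.3, 0.3), inside(0.8, 0.4, 0.0))   # True False
```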
It has four of the triangular faces, and 3 central squares. A regular octahedron is a 3-ball in the Manhattan () metric. Characteristic orthoscheme Like all regular convex polytopes, the octahedron can be dissected into an integral number of disjoint orthoschemes, all of the same shape characteristic of the polytope. A polytope's characteristic orthoscheme is a fundamental property because the polytope is generated by reflections in the facets of its orthoscheme. The orthoscheme occurs in two chiral forms which are mirror images of each other. The characteristic orthoscheme of a regular polyhedron is a quadrirectangular irregular tetrahedron. The faces of the octahedron's characteristic tetrahedron lie in the octahedron's mirror planes of symmetry. The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess, among its mirror planes, some that do not pass through any of its faces. The octahedron's symmetry group is denoted B3. The octahedron and its dual polytope, the cube, have the same symmetry group but different characteristic tetrahedra. The characteristic tetrahedron of the regular octahedron can be found by a canonical dissection of the regular octahedron which subdivides it into 48 of these characteristic orthoschemes surrounding the octahedron's center. Three left-handed orthoschemes and three right-handed orthoschemes meet in each of the octahedron's eight faces, the six orthoschemes collectively forming a trirectangular tetrahedron: a triangular pyramid with the octahedron face as its equilateral base, and its cube-cornered apex at the center of the octahedron. If the octahedron has edge length 𝒍 = 2, its characteristic tetrahedron's six edges have lengths , , around its exterior right-triangle face (the edges opposite the characteristic angles 𝟀, 𝝉, 𝟁), plus , , (edges that are the characteristic radii of the octahedron). The 3-edge path along orthogonal edges of the orthoscheme is , , , first from an octahedron vertex to an octahedron edge center, then turning 90° to an octahedron face center, then turning 90° to the octahedron center. The orthoscheme has four dissimilar right triangle faces. The exterior face is a 90-60-30 triangle which is one-sixth of an octahedron face. The three faces interior to the octahedron are: a 45-90-45 triangle with edges , , , a right triangle with edges , , , and a right triangle with edges , , . Uniform colorings and symmetry There are 3 uniform colorings of the octahedron, named by the triangular face colors going around each vertex: 1212, 1112, 1111. The octahedron's symmetry group is Oh, of order 48, the three dimensional hyperoctahedral group. This group's subgroups include D3d (order 12), the symmetry group of a triangular antiprism; D4h (order 16), the symmetry group of a square bipyramid; and Td (order 24), the symmetry group of a rectified tetrahedron. These symmetries can be emphasized by different colorings of the faces. Other types of octahedra An octahedron can be any polyhedron with eight faces. In a previous example, the regular octahedron has 6 vertices and 12 edges, the minimum for an octahedron; irregular octahedra may have as many as 12 vertices and 18 edges. There are 257 topologically distinct convex octahedra, excluding mirror images. More specifically there are 2, 11, 42, 74, 76, 38, 14 for octahedra with 6 to 12 vertices respectively. 
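Picking up the counts mentioned above (48 characteristic orthoschemes, and a symmetry group Oh of order 48), the sketch below enumerates the 48 signed permutations of the coordinate axes and checks that each one maps the octahedron's vertex set to itself. This is only an illustrative verification of the group order, not a construction of the orthoschemes.

```python
# Sketch: the 48 elements of the full octahedral symmetry group realized as
# signed permutations of the coordinate axes; each maps the vertex set of the
# regular octahedron to itself. Only a check of the group order quoted above.
from itertools import permutations, product

verts = {(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)}

def apply(perm, signs, v):
    # permute the coordinates of v, then flip signs
    return tuple(signs[i] * v[perm[i]] for i in range(3))

group = [(perm, signs)
         for perm in permutations(range(3))
         for signs in product((1, -1), repeat=3)]

assert len(group) == 48
assert all({apply(p, s, v) for v in verts} == verts for p, s in group)
print("48 symmetries, each preserving the vertex set")
```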
Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces. Some polyhedra with eight faces, other than the square bipyramids, include the following: Hexagonal prism: Two faces are parallel regular hexagons; six squares link corresponding pairs of hexagon edges. Heptagonal pyramid: One face is a heptagon (usually regular), and the remaining seven faces are triangles (usually isosceles); the triangular faces cannot all be equilateral. Truncated tetrahedron: The four faces from the tetrahedron are truncated to become regular hexagons, and there are four more equilateral triangle faces where each tetrahedron vertex was truncated. Tetragonal trapezohedron: The eight faces are congruent kites. Gyrobifastigium: Two uniform triangular prisms glued over one of their square sides so that no triangle shares an edge with another triangle (Johnson solid 26). Truncated triangular trapezohedron, also called Dürer's solid: obtained by truncating two opposite corners of a cube or rhombohedron, this has six pentagon faces and two triangle faces. Octagonal hosohedron: degenerate in Euclidean space, but can be realized spherically. The following polyhedra are combinatorially equivalent to the regular octahedron. They all have six vertices, eight triangular faces, and twelve edges that correspond one-for-one with the features of the regular octahedron: Triangular antiprisms: Two faces are equilateral, lie on parallel planes, and have a common axis of symmetry; the other six triangles are isosceles. The regular octahedron is a special case in which the six lateral triangles are also equilateral. Tetragonal bipyramids, in which at least one of the equatorial quadrilaterals lies on a plane. The regular octahedron is a special case in which all three quadrilaterals are planar squares. Schönhardt polyhedron, a non-convex polyhedron that cannot be partitioned into tetrahedra without introducing new vertices. Bricard octahedron, a non-convex self-crossing flexible polyhedron. Octahedra in the physical world Octahedra in nature Natural crystals of diamond, alum or fluorite are commonly octahedral, as is the space-filling tetrahedral-octahedral honeycomb. The plates of kamacite alloy in octahedrite meteorites are arranged paralleling the eight faces of an octahedron, producing the Widmanstätten patterns seen in nickel-iron crystals. Many metal ions coordinate six ligands in an octahedral or distorted octahedral configuration. Octahedra in art and culture Especially in roleplaying games, this solid is known as a "d8", one of the more common polyhedral dice. If each edge of an octahedron is replaced by a one-ohm resistor, the resistance between opposite vertices is 1/2 ohm, and that between adjacent vertices 5/12 ohm (see the sketch after this passage). Six musical notes can be arranged on the vertices of an octahedron in such a way that each edge represents a consonant dyad and each face represents a consonant triad; see hexany. Tetrahedral octet truss A space frame of alternating tetrahedra and half-octahedra derived from the tetrahedral-octahedral honeycomb was invented by Buckminster Fuller in the 1950s. It is commonly regarded as the strongest building structure for resisting cantilever stresses. Related polyhedra A regular octahedron can be augmented into a tetrahedron by adding 4 tetrahedra on alternated faces. Adding tetrahedra to all 8 faces creates the stellated octahedron.
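As a numerical cross-check of the resistance values quoted above, the following Python sketch computes effective resistances on the octahedron's edge graph from the graph Laplacian pseudoinverse, assuming one-ohm resistors on all twelve edges.

```python
# Sketch: effective resistances on the octahedron's edge graph, assuming a
# one-ohm resistor on each of the twelve edges, via the Laplacian pseudoinverse.
import numpy as np
from itertools import combinations

verts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
m = len(verts)

A = np.zeros((m, m))
for i, j in combinations(range(m), 2):
    if not all(a == -b for a, b in zip(verts[i], verts[j])):   # skip antipodes
        A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
Lp = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse

def resistance(i, j):
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

print("adjacent vertices:", resistance(0, 2))   # 5/12 ohm ~ 0.4167
print("opposite vertices:", resistance(0, 1))   # 1/2 ohm
```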
The octahedron is one of a family of uniform polyhedra related to the cube. It is also one of the simplest examples of a hypersimplex, a polytope formed by certain intersections of a hypercube with a hyperplane. The octahedron is topologically related as a part of sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane. Tetratetrahedron The regular octahedron can also be considered a rectified tetrahedron – and can be called a tetratetrahedron. This can be shown by a 2-color face model. With this coloring, the octahedron has tetrahedral symmetry. Compare this truncation sequence between a tetrahedron and its dual: The above shapes may also be realized as slices orthogonal to the long diagonal of a tesseract. If this diagonal is oriented vertically with a height of 1, then the first five slices above occur at heights r, , , , and s, where r is any number in the range , and s is any number in the range . The octahedron as a tetratetrahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32 all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right angle corner of the domain. Trigonal antiprism As a trigonal antiprism, the octahedron is related to the hexagonal dihedral symmetry family. Other related polyhedra Truncation of two opposite vertices results in a square bifrustum. The octahedron can be generated as the case of a 3D superellipsoid with all exponent values set to 1.
Mathematics
Three-dimensional space
null
22461
https://en.wikipedia.org/wiki/Osteoporosis
Osteoporosis
Osteoporosis is a systemic skeletal disorder characterized by low bone mass, micro-architectural deterioration of bone tissue leading to more porous bone, and consequent increase in fracture risk. It is the most common reason for a broken bone among the elderly. Bones that commonly break include the vertebrae in the spine, the bones of the forearm, the wrist, and the hip. Until a broken bone occurs there are typically no symptoms. Bones may weaken to such a degree that a break may occur with minor stress or spontaneously. After the broken bone heals, some people may have chronic pain and a decreased ability to carry out normal activities. Osteoporosis may be due to lower-than-normal maximum bone mass and greater-than-normal bone loss. Bone loss increases after menopause in women due to lower levels of estrogen, and after andropause in older men due to lower levels of testosterone. Osteoporosis may also occur due to a number of diseases or treatments, including alcoholism, anorexia, hyperthyroidism, kidney disease, and after oophorectomy (surgical removal of the ovaries). Certain medications increase the rate of bone loss, including some antiseizure medications, chemotherapy, proton pump inhibitors, selective serotonin reuptake inhibitors, and glucocorticosteroids. Smoking and getting an inadequate amount of exercise are also risk factors. Osteoporosis is defined as a bone density of 2.5 standard deviations below that of a young adult. This is typically measured by dual-energy X-ray absorptiometry (DXA or DEXA). Prevention of osteoporosis includes a proper diet during childhood, hormone replacement therapy for menopausal women, and efforts to avoid medications that increase the rate of bone loss. Efforts to prevent broken bones in those with osteoporosis include a good diet, exercise, and fall prevention. Lifestyle changes such as stopping smoking and not drinking alcohol may help. Bisphosphonate medications are useful to decrease future broken bones in those with previous broken bones due to osteoporosis. In those with osteoporosis but no previous broken bones, they are less effective. They do not appear to affect the risk of death. Osteoporosis becomes more common with age. About 15% of Caucasians in their 50s and 70% of those over 80 are affected. It is more common in women than men. In the developed world, depending on the method of diagnosis, 2% to 8% of males and 9% to 38% of females are affected. Rates of disease in the developing world are unclear. About 22 million women and 5.5 million men in the European Union had osteoporosis in 2010. In the United States in 2010, about 8 million women and between 1 and 2 million men had osteoporosis. White and Asian people are at greater risk. The word "osteoporosis" is from the Greek terms for "porous bones". Signs and symptoms Osteoporosis has no symptoms and the person usually does not know that they have osteoporosis until a bone is broken. Osteoporotic fractures occur in situations where healthy people would not normally break a bone; they are therefore regarded as fragility fractures. Typical fragility fractures occur in the vertebral column, rib, hip and wrist. Examples of situations where people would not normally break a bone include a fall from standing height, normal day-to-day activities such as lifting, bending, or coughing. Fractures Fractures are a common complication of osteoporosis and can result in disability. 
Acute and chronic pain in the elderly is often attributed to fractures from osteoporosis and can lead to further disability and early mortality. These fractures may also be asymptomatic. The most common osteoporotic fractures are of the wrist, spine, shoulder and hip. The symptoms of a vertebral collapse ("compression fracture") are sudden back pain, often with radicular pain (shooting pain due to nerve root compression) and rarely with spinal cord compression or cauda equina syndrome. Multiple vertebral fractures lead to a stooped posture, loss of height, and chronic pain with resultant reduction in mobility. Fractures of the long bones acutely impair mobility and may require surgery. Hip fracture, in particular, usually requires prompt surgery, as serious risks are associated with it, such as deep vein thrombosis and pulmonary embolism. There is also an increased risk of mortality associated with osteoporosis-related hip fracture, with the mean average mortality rate within one year for Europe being 23.3%, for Asia 17.9%, United States 21% and Australia 24.9%. Fracture risk calculators assess the risk of fracture based upon several criteria, including bone mineral density, age, smoking, alcohol usage, weight, and gender. Recognized calculators include FRAX, the Garvan FRC calculator and QFracture as well as the open access FREM tool. The FRAX tool can also be applied in a modification adapted to routinely collected health data. The term "established osteoporosis" is used when a broken bone due to osteoporosis has occurred. Osteoporosis is a part of frailty syndrome. Risk of falls There is an increased risk of falls associated with aging. These falls can lead to skeletal damage at the wrist, spine, hip, knee, foot, and ankle. Part of the fall risk is because of impaired eyesight due to many causes, (e.g. glaucoma, macular degeneration), balance disorder, movement disorders (e.g. Parkinson's disease), dementia, and sarcopenia (age-related loss of skeletal muscle). Collapse (transient loss of postural tone with or without loss of consciousness). Causes of syncope are manifold, but may include cardiac arrhythmias (irregular heart beat), vasovagal syncope, orthostatic hypotension (abnormal drop in blood pressure on standing up), and seizures. Removal of obstacles and loose carpets in the living environment may substantially reduce falls. Those with previous falls, as well as those with gait or balance disorders, are most at risk. Complications As well as susceptibility to breaks and fractures, osteoporosis can lead to other complications. Bone fractures from osteoporosis can lead to disability and an increased risk of death after the injury in elderly people. Osteoporosis can decrease the quality of life, increase disabilities, and increase the financial costs to health care systems. Risk factors The risk of having osteoporosis includes age and sex. Risk factors include both nonmodifiable (for example, age and some medications that may be necessary to treat a different condition) and modifiable (for example, alcohol use, smoking, vitamin deficiency). In addition, osteoporosis is a recognized complication of specific diseases and disorders. Medication use is theoretically modifiable, although in many cases, the use of medication that increases osteoporosis risk may be unavoidable. Caffeine is not a risk factor for osteoporosis. 
Nonmodifiable The most important risk factors for osteoporosis are advanced age (in both men and women) and female sex; estrogen deficiency following menopause or surgical removal of the ovaries is correlated with a rapid reduction in bone mineral density, while in men, a decrease in testosterone levels has a comparable (but less pronounced) effect. Ethnicity: While osteoporosis occurs in people from all ethnic groups, European or Asian ancestry predisposes for osteoporosis. Heredity: Those with a family history of fracture or osteoporosis are at an increased risk; the heritability of fracture risk, as well as low bone mineral density, is relatively high, ranging from 25 to 80%. At least 30 genes are associated with the development of osteoporosis. Those who have already had a fracture are at least twice as likely to have another fracture compared to someone of the same age and sex. Build: A small stature is also a nonmodifiable risk factor associated with the development of osteoporosis. Potentially modifiable Alcohol: Alcohol intake greater than three units/day) may increase the risk of osteoporosis and people who consumed 0.5-1 drinks a day may have 1.38 times the risk compared to people who do not consume alcohol. Vitamin D deficiency: Low circulating Vitamin D is common among the elderly worldwide. Mild vitamin D insufficiency is associated with increased parathyroid hormone (PTH) production. PTH increases bone resorption, leading to bone loss. A positive association exists between serum 1,25-dihydroxycholecalciferol levels and bone mineral density, while PTH is negatively associated with bone mineral density. Tobacco smoking: Many studies have associated smoking with decreased bone health, but the mechanisms are unclear. Tobacco smoking has been proposed to inhibit the activity of osteoblasts, and is an independent risk factor for osteoporosis. Smoking also results in increased breakdown of exogenous estrogen, lower body weight and earlier menopause, all of which contribute to lower bone mineral density. Malnutrition: Nutrition has an important and complex role in maintenance of good bone. Identified risk factors include low dietary calcium or phosphorus, magnesium, zinc, boron, iron, fluoride, copper, vitamins A, K, E and C (and D where skin exposure to sunlight provides an inadequate supply). Excess sodium is a risk factor. High blood acidity may be diet-related, and is a known antagonist of bone. Imbalance of omega-6 to omega-3 polyunsaturated fats is yet another identified risk factor. A 2017 meta-analysis of published medical studies shows that higher protein diet helps slightly with lower spine density but does not show significant improvement with other bones. A 2023 meta-analysis sees no evidence for the relation between protein intake and bone health. Underweight/inactive: Bone remodeling occurs in response to physical stress, so physical inactivity can lead to significant bone loss. Weight bearing exercise can increase peak bone mass achieved in adolescence, and a highly significant correlation between bone strength and muscle strength has been determined. The incidence of osteoporosis is lower in overweight people. Endurance training: In female endurance athletes, large volumes of training can lead to decreased bone density and an increased risk of osteoporosis. This effect might be caused by intense training suppressing menstruation, producing amenorrhea, and it is part of the female athlete triad. 
However, for male athletes, the situation is less clear, and although some studies have reported low bone density in elite male endurance athletes, others have instead seen increased leg bone density. Heavy metals: A strong association between cadmium and lead with bone disease has been established. Low-level exposure to cadmium is associated with an increased loss of bone mineral density readily in both genders, leading to pain and increased risk of fractures, especially in the elderly and in females. Higher cadmium exposure results in osteomalacia (softening of the bone). Soft drinks: Some studies indicate soft drinks (many of which contain phosphoric acid) may increase risk of osteoporosis, at least in women. Others suggest soft drinks may displace calcium-containing drinks from the diet rather than directly causing osteoporosis. Proton pump inhibitors (such as lansoprazole, esomeprazole, and omeprazole), which decrease the production of stomach acid, are a risk factor for bone fractures if taken for two or more years, due to decreased absorption of calcium in the stomach. Medical disorders Many diseases and disorders have been associated with osteoporosis. For some, the underlying mechanism influencing the bone metabolism is straightforward, whereas for others the causes are multiple or unknown. In general, immobilization causes bone loss. For example, localized osteoporosis can occur after prolonged immobilization of a fractured limb in a cast. This is also more common in active people with a high bone turn-over (for example, athletes). Other examples include bone loss during space flight or in people who are bedridden or use wheelchairs for various reasons. Hypogonadal states can cause secondary osteoporosis. These include Turner syndrome, Klinefelter syndrome, Kallmann syndrome, anorexia nervosa, andropause, hypothalamic amenorrhea or hyperprolactinemia. In females, the effect of hypogonadism is mediated by estrogen deficiency. It can appear as early menopause (<45 years) or from prolonged premenopausal amenorrhea (>1 year). Bilateral oophorectomy (surgical removal of the ovaries) and premature ovarian failure cause deficient estrogen production. In males, testosterone deficiency is the cause (for example, andropause or after surgical removal of the testes). Endocrine disorders that can induce bone loss include Cushing's syndrome, hyperparathyroidism, hyperthyroidism, hypothyroidism, diabetes mellitus type 1 and 2, acromegaly, and adrenal insufficiency. Malnutrition, parenteral nutrition and malabsorption can lead to osteoporosis. Nutritional and gastrointestinal disorders that can predispose to osteoporosis include undiagnosed and untreated coeliac disease (both symptomatic and asymptomatic people), Crohn's disease, ulcerative colitis, cystic fibrosis, surgery (after gastrectomy, intestinal bypass surgery or bowel resection) and severe liver disease (especially primary biliary cirrhosis). People with lactose intolerance or milk allergy may develop osteoporosis due to restrictions of calcium-containing foods. Individuals with bulimia can also develop osteoporosis. Those with an otherwise adequate calcium intake can develop osteoporosis due to the inability to absorb calcium and/or vitamin D. Other micronutrients such as vitamin K or vitamin B12 deficiency may also contribute. 
People with rheumatologic disorders such as rheumatoid arthritis, ankylosing spondylitis, systemic lupus erythematosus and polyarticular juvenile idiopathic arthritis are at increased risk of osteoporosis, either as part of their disease or because of other risk factors (notably corticosteroid therapy). Systemic diseases such as amyloidosis and sarcoidosis can also lead to osteoporosis. Chronic kidney disease can lead to renal osteodystrophy. Hematologic disorders linked to osteoporosis are multiple myeloma and other monoclonal gammopathies, lymphoma, leukemia, mastocytosis, hemophilia, sickle-cell disease and thalassemia. Several inherited or genetic disorders have been linked to osteoporosis. These include osteogenesis imperfecta, Multicentric carpotarsal osteolysis syndrome, Multicentric Osteolysis, Nodulosis, and Arthropathy, Marfan syndrome, hemochromatosis, hypophosphatasia (for which it is often misdiagnosed), glycogen storage diseases, homocystinuria, Ehlers–Danlos syndrome, porphyria, Menkes' syndrome, epidermolysis bullosa and Gaucher's disease. People with scoliosis of unknown cause also have a higher risk of osteoporosis. Bone loss can be a feature of complex regional pain syndrome. It is also more frequent in people with Parkinson's disease and chronic obstructive pulmonary disease. People with Parkinson's disease have a higher risk of broken bones. This is related to poor balance and poor bone density. In Parkinson's disease there may be a link between the loss of dopaminergic neurons and altered calcium metabolism (and iron metabolism) causing a stiffening of the skeleton and kyphosis. Medication Certain medications have been associated with an increase in osteoporosis risk; only glucocorticosteroids and anticonvulsants are classically associated, but evidence is emerging with regard to other drugs. Steroid-induced osteoporosis (SIOP) arises due to use of glucocorticoids – analogous to Cushing's syndrome and involving mainly the axial skeleton. The synthetic glucocorticoid prescription drug prednisone is a main candidate after prolonged intake. Some professional guidelines recommend prophylaxis in patients who take the equivalent of more than 30 mg hydrocortisone (7.5 mg of prednisolone), especially when this is in excess of three months. It is recommended to use calcium or Vitamin D as prevention. Alternate day use may not prevent this complication. Barbiturates, phenytoin and some other enzyme-inducing antiepileptics – these probably accelerate the metabolism of vitamin D. L-Thyroxine over-replacement may contribute to osteoporosis, in a similar fashion as thyrotoxicosis does. This can be relevant in subclinical hypothyroidism. Several drugs induce hypogonadism, for example aromatase inhibitors used in breast cancer, methotrexate and other antimetabolite drugs, depot progesterone and gonadotropin-releasing hormone agonists. Anticoagulants – long-term use of heparin is associated with a decrease in bone density, and warfarin (and related coumarins) have been linked with an increased risk in osteoporotic fracture in long-term use. Proton pump inhibitors – these drugs inhibit the production of stomach acid; this is thought to interfere with calcium absorption. Chronic phosphate binding may also occur with aluminium-containing antacids. Thiazolidinediones (used for diabetes) – rosiglitazone and possibly pioglitazone, inhibitors of PPARγ, have been linked with an increased risk of osteoporosis and fracture. Chronic lithium therapy has been associated with osteoporosis. 
Pregnancy-associated osteoporosis Osteoporosis due to pregnancy and lactation is a rare condition of unknown cause. Evolutionary Age-related bone loss is common among humans because humans exhibit less dense bones than other primate species. Because human bones are more porous, the frequency of severe osteoporosis and osteoporosis-related fractures is higher. The human vulnerability to osteoporosis is an obvious cost, but it may be justified by the advantage of bipedalism, suggesting that this vulnerability is a byproduct of it. It has been suggested that porous bones help to absorb the increased stress of bearing weight on two limbs, compared with our primate counterparts, who have four limbs over which to disperse the force. In addition, the porosity allows for more flexibility and a lighter skeleton that is easier to support. One other consideration may be that diets today contain much less calcium than the diets of other primates or of the quadrupedal ancestors of humans, which may make signs of osteoporosis more likely to appear. Fracture risk assessment In the absence of risk factors other than sex and age, a BMD measurement using dual-energy X-ray absorptiometry (DXA) is recommended for women at age 65. For women with risk factors, a clinical FRAX assessment is advised at age 50. Mechanics Osteoporosis occurs when the reduction in bone mass surpasses a critical threshold, bringing greater susceptibility to fracture. Fractures occur when the force acting on a bone is greater than the strength of the bone. Studying the mechanical properties and behavior of bone is crucial to understanding the pathology of osteoporosis and skeletal degradation, particularly given how often osteoporosis is under-diagnosed. The mechanical properties of a material depend on its geometry and inherent structure. Bone as a material is very complex because of its hierarchical structure, in which characteristics vary across length scales. At the basic scale, bone is composed of an organic matrix of collagen type-I. Collagen type-I molecules form a composite material with hydroxyapatite to make up collagen fibrils. The hierarchical structure continues with the fibrils being arranged into different patterns, such as lamellae. The microstructure of bone then forms vascular channels, called osteons, which are surrounded by lamellae. At the subsequent scale of bones, there are different types of bone based on morphology: cortical (solid), cancellous (sponge), or trabecular (thin plates). A basic picture of the hierarchical structure of bones is essential because the structure translates to the mechanical behavior of bones. Previous work indicates that osteoporotic bones undergo specific structural changes that contribute to altered mechanical behavior. For instance, a study demonstrated that osteoporotic bone exhibits reduced bone volume fraction, trabecular thickness, and connectivity. In another study, osteoporosis in human cancellous bone led to 3–27% variability in stiffness and strength compared to healthy bone. Additionally, bone mineral density (BMD) is a parameter used to evaluate fracture risk in bones and is used as a predictor of osteoporosis. A lower BMD value correlates with decreased bone mass and higher bone fragility. Furthermore, bone diseases, such as osteoporosis, are known to alter the composition of collagen and other proteins that make up the bone matrix. These alterations in composition contribute to how bone can handle mechanical loading.
Thus, osteoporosis-induced changes at the macroscopic and microscopic levels significantly impact the mechanical properties of bone, predisposing individuals to fractures even under relatively low mechanical loads. Understanding these structural alterations is vital for developing effective diagnostic and therapeutic strategies for osteoporosis. Pathogenesis The underlying mechanism in all cases of osteoporosis is an imbalance between bone resorption and bone formation. In normal bone, matrix remodeling of bone is constant; up to 10% of all bone mass may be undergoing remodeling at any point in time. The process takes place in bone multicellular units (BMUs) as first described by Frost & Thomas in 1963. Osteoclasts are assisted by transcription factor PU.1 to degrade the bone matrix, while osteoblasts rebuild the bone matrix. Low bone mass density can then occur when osteoclasts are degrading the bone matrix faster than the osteoblasts are rebuilding the bone. The three main mechanisms by which osteoporosis develops are an inadequate peak bone mass (the skeleton develops insufficient mass and strength during growth), excessive bone resorption, and inadequate formation of new bone during remodeling, likely due to mesenchymal stem cells biasing away from the osteoblast and toward the marrow adipocyte lineage. An interplay of these three mechanisms underlies the development of fragile bone tissue. Hormonal factors strongly determine the rate of bone resorption; lack of estrogen (e.g. as a result of menopause) increases bone resorption, as well as decreasing the deposition of new bone that normally takes place in weight-bearing bones. The amount of estrogen needed to suppress this process is lower than that normally needed to stimulate the uterus and breast gland. The α-form of the estrogen receptor appears to be the most important in regulating bone turnover. In addition to estrogen, calcium metabolism plays a significant role in bone turnover, and deficiency of calcium and vitamin D leads to impaired bone deposition; in addition, the parathyroid glands react to low calcium levels by secreting parathyroid hormone (parathormone, PTH), which increases bone resorption to ensure sufficient calcium in the blood. The role of calcitonin, a hormone generated by the thyroid that increases bone deposition, is less clear and probably not as significant as that of PTH. The activation of osteoclasts is regulated by various molecular signals, of which RANKL (receptor activator of nuclear factor kappa-B ligand) is one of the best-studied. This molecule is produced by osteoblasts and other cells (e.g. lymphocytes), and stimulates RANK (receptor activator of nuclear factor κB). Osteoprotegerin (OPG) binds RANKL before it has an opportunity to bind to RANK, and hence suppresses its ability to increase bone resorption. RANKL, RANK, and OPG are closely related to tumor necrosis factor and its receptors. The role of the Wnt signaling pathway is recognized, but less well understood. Local production of eicosanoids and interleukins is thought to participate in the regulation of bone turnover, and excess or reduced production of these mediators may underlie the development of osteoporosis. Osteoclast maturation and activity is also regulated by activation of colony stimulating factor 1 receptor (CSF1R). Menopause-associated increase production of TNF-α stimulates stromal cells to produce colony stimulating factor 1 (CSF-1) which activates CSF1R and stimulates osteoclasts to reabsorb bone. 
Trabecular bone (or cancellous bone) is the sponge-like bone in the ends of long bones and vertebrae. Cortical bone is the hard outer shell of bones and the middle of long bones. Because osteoblasts and osteoclasts inhabit the surface of bones, trabecular bone is more active and is more subject to bone turnover and remodeling. Not only is bone density decreased, but the microarchitecture of bone is also disrupted. The weaker spicules of trabecular bone break ("microcracks"), and are replaced by weaker bone. Common osteoporotic fracture sites, the wrist, the hip, and the spine, have a relatively high trabecular bone to cortical bone ratio. These areas rely on the trabecular bone for strength, so the intense remodeling causes these areas to degenerate most when the remodeling is imbalanced. Around the ages of 30–35, cancellous or trabecular bone loss begins. Women may lose as much as 50%, while men lose about 30%. Diagnosis Osteoporosis can be diagnosed using conventional radiography and by measuring the bone mineral density (BMD). The most popular method of measuring BMD is dual-energy X-ray absorptiometry. In addition to the detection of abnormal BMD, the diagnosis of osteoporosis requires investigations into potentially modifiable underlying causes; this may be done with blood tests. Depending on the likelihood of an underlying problem, investigations for cancer with metastasis to the bone, multiple myeloma, Cushing's disease and other above-mentioned causes may be performed. Conventional radiography Conventional radiography is useful, both by itself and in conjunction with CT or MRI, for detecting complications of osteopenia (reduced bone mass; pre-osteoporosis), such as fractures; for differential diagnosis of osteopenia; or for follow-up examinations in specific clinical settings, such as soft tissue calcifications, secondary hyperparathyroidism, or osteomalacia in renal osteodystrophy. However, radiography is relatively insensitive to detection of early disease and requires a substantial amount of bone loss (about 30%) to be apparent on X-ray images. The main radiographic features of generalized osteoporosis are cortical thinning and increased radiolucency. Frequent complications of osteoporosis are vertebral fractures, for which spinal radiography can help considerably in diagnosis and follow-up. Vertebral height measurements can objectively be made using plain-film X-rays by using several methods such as height loss together with area reduction, particularly when looking at vertical deformity in T4-L4, or by determining a spinal fracture index that takes into account the number of vertebrae involved. Involvement of multiple vertebral bodies leads to kyphosis of the thoracic spine, leading to what is known as dowager's hump. Dual-energy X-ray Dual-energy X-ray absorptiometry (DEXA scan) is considered the gold standard for the diagnosis of osteoporosis. Osteoporosis is diagnosed when the bone mineral density is less than or equal to 2.5 standard deviations below that of a young (30–40-year-old), healthy adult female reference population. This is translated as a T-score. But because bone density decreases with age, more people become osteoporotic with increasing age. The World Health Organization has established the following diagnostic guidelines: a T-score of −1.0 or above is normal; a T-score between −1.0 and −2.5 is classified as osteopenia (low bone mass); a T-score of −2.5 or below is classified as osteoporosis; and a T-score of −2.5 or below in the presence of one or more fragility fractures is classified as severe (established) osteoporosis. The International Society for Clinical Densitometry takes the position that a diagnosis of osteoporosis in men under 50 years of age should not be made on the basis of densitometric criteria alone.
The Society also states that, for premenopausal women, Z-scores (comparison with the age group rather than with peak bone mass) rather than T-scores should be used, and that the diagnosis of osteoporosis in such women also should not be made on the basis of densitometric criteria alone. Biomarkers Chemical biomarkers are a useful tool in detecting bone degradation. The enzyme cathepsin K breaks down type-I collagen, an important constituent in bones. Prepared antibodies can recognize the resulting fragment, called a neoepitope, as a way to diagnose osteoporosis. Increased urinary excretion of C-telopeptides, a type-I collagen breakdown product, also serves as a biomarker for osteoporosis. Other measuring tools Quantitative computed tomography (QCT) differs from DXA in that it gives separate estimates of BMD for trabecular and cortical bone and reports precise volumetric mineral density in mg/cm³ rather than BMD's relative Z-score. Among QCT's advantages: it can be performed at axial and peripheral sites, can be calculated from existing CT scans without a separate radiation dose, is sensitive to change over time, can analyze a region of any size or shape, excludes irrelevant tissue such as fat, muscle, and air, and does not require knowledge of the patient's subpopulation in order to create a clinical score (e.g. the Z-score of all females of a certain age). Among QCT's disadvantages: it requires a high radiation dose compared to DXA, CT scanners are large and expensive, and because its practice has been less standardized than BMD measurement by DXA, its results are more operator-dependent. Peripheral QCT has been introduced to improve upon the limitations of DXA and QCT. Quantitative ultrasound has many advantages in assessing osteoporosis. The modality is small, no ionizing radiation is involved, measurements can be made quickly and easily, and the cost of the device is low compared with DXA and QCT devices. The calcaneus is the most common skeletal site for quantitative ultrasound assessment because it has a high percentage of trabecular bone that is replaced more often than cortical bone, providing early evidence of metabolic change. Also, the calcaneus is fairly flat with parallel surfaces, reducing repositioning errors. The method can be applied to children, neonates, and preterm infants, just as well as to adults. Some ultrasound devices can be used on the tibia. Screening The U.S. Preventive Services Task Force (USPSTF) recommends that all women 65 years of age or older be screened by bone densitometry. Additionally, it recommends screening younger women with risk factors. There is insufficient evidence to make recommendations about the intervals for repeated screening and the appropriate age to stop screening. In men, the harm versus benefit of screening for osteoporosis is unknown. Prescrire states that the need to test for osteoporosis in those who have not had a previous bone fracture is unclear. The International Society for Clinical Densitometry suggests BMD testing for men aged 70 or older, or for younger men whose risk is equal to that of a 70‑year‑old. A number of tools exist to help determine whom it is reasonable to test. Prevention Lifestyle prevention of osteoporosis is in many aspects the inverse of the potentially modifiable risk factors. As tobacco smoking and high alcohol intake have been linked with osteoporosis, smoking cessation and moderation of alcohol intake are commonly recommended as ways to help prevent it.
In people with coeliac disease, adherence to a gluten-free diet decreases the risk of developing osteoporosis and increases bone density. The diet should ensure optimal calcium intake (at least one gram daily); measuring vitamin D levels is recommended, with specific supplements taken if necessary. Osteoporosis can affect nearly 1 in 3 women, and bone loss is most rapid within the first 2–3 years after menopause. This can be prevented by menopause hormone therapy (MHT), which is intended to prevent bone loss and the degradation of the bone microarchitecture and has been noted to reduce fracture risk by 20–30%. However, MHT has been linked to safety concerns, so it is not generally recommended. In terms of day-to-day management of this potentially limiting disease, there are practices that can and should be built into the daily lifestyle. For example, it is beneficial for individuals with osteoporosis to refrain from excess alcohol consumption and to avoid smoking. These individuals should also be intentional about consuming adequate amounts of protein, calcium, and vitamin D. Women at even higher risk of fracture may additionally require drug therapy. Physical activity is generally recommended to prevent decreases in bone mineral density. Resistance training is the most commonly recommended form of physical activity, and it can take multiple forms. High-intensity and high-impact training has been shown to be highly beneficial for bone health and is the most effective at improving and maintaining bone density in the lower spine and femur. Although these types of exercises are safe for postmenopausal women, there still may be a need for supervision and precautionary measures. Nutrition Studies of the benefits of supplementation with calcium and vitamin D are conflicting, possibly because most studies did not have people with low dietary intakes. A 2018 review by the USPSTF found low-quality evidence that the routine use of calcium and vitamin D supplements (or both supplements together) did not reduce the risk of having an osteoporotic fracture in male and female adults living in the community who had no known history of vitamin D deficiency, osteoporosis, or a fracture. The USPSTF does not recommend low-dose supplementation (less than 1 g of calcium and 400 IU of vitamin D) in postmenopausal women, as there does not appear to be a difference in fracture risk. A 2015 review found little data that supplementation of calcium decreases the risk of fractures. While some meta-analyses have found a benefit of vitamin D supplements combined with calcium for prevention of fractures, they did not find a benefit of vitamin D supplements (800 IU/day or less) alone. Regarding adverse effects, supplementation does not appear to affect overall risk of death, although calcium supplementation could potentially be associated with some increased risk of myocardial infarctions, stroke, kidney stones, and gastrointestinal symptoms. There is no evidence that supplementation before menopause can enhance bone mineral density. Vitamin K deficiency is also a risk factor for osteoporotic fractures. Gamma-glutamyl carboxylase (GGCX), the enzyme encoded by the GGCX gene, is vitamin K-dependent. Functional polymorphisms in the gene may contribute to variation in bone metabolism and BMD.
Vitamin K2 is also used as a means of treatment for osteoporosis, and polymorphisms of GGCX could explain individual variation in the response to vitamin K treatment. Dietary sources of calcium include dairy products, leafy greens, legumes, and beans. There has been conflicting evidence about whether or not dairy is an adequate source of calcium to prevent fractures. The National Academy of Sciences recommends 1,000 mg of calcium for those aged 19–50, and 1,200 mg for those aged 50 and above. A review of the evidence shows no adverse effect of higher protein intake on bone health. Physical exercise Evidence suggests that exercise can help promote bone health in older people. In particular, physical exercise can be beneficial for bone density in postmenopausal women, and lead to a slightly reduced risk of a bone fracture (absolute difference 4%). Weight-bearing exercise has been found to cause an adaptive response in the skeleton, promoting osteoblast activity and protecting bone density. A position statement concluded that increased bone activity and weight-bearing exercises at a young age prevent bone fragility in adults. Limitations in the available evidence hinder the production of detailed evidence-based exercise recommendations. Some expert consensus guidance does exist. International guidelines recommend multicomponent exercise tailored to individual needs that includes "balance and mobility training, paired with weight bearing exercise, progressive resistance training, and posture exercises" (generally accompanied by optimal nutrition). Cycling and swimming are not considered weight-bearing exercise, and neither helps slow age-related bone loss (professional bicycle racing has a negative effect on bone density). Risk of adverse events from the types of exercise usually considered appropriate for people with osteoporosis is generally low (though repeated forceful forward spinal bends are discouraged). For people who have had vertebral fractures, there is moderate-quality evidence that exercise is likely to improve physical performance, as well as some low-quality evidence suggesting that exercise may reduce pain and improve quality of life. Physical exercise prescription Osteoporosis is a very prevalent disease in the elderly population, but not much is known about the optimal prescription and dosage of physical exercise to help prevent bone mineral loss. Much of the focus in osteoporosis care is on prevention rather than on maintenance, even though maintenance should be a leading consideration when deciding on an approach. When prescribing exercise, the individual's needs should be taken into account; these can be identified through a pre-exercise evaluation or screening, and the exercise should be tailored to the individual and to what works for them. Important factors often overlooked when treating osteoporosis are muscle strength and maintenance of BMD, which should be incorporated into the program to optimize the benefits of exercise. This entails including exercises that improve muscle strength alongside exercises that improve skeletal strength or BMD, as the two go hand in hand in reducing fall and fracture risk. It is also important to reference the ACSM general training principles to better design a program for the individual. The appropriate mode and dosage of exercise has been a recurring question in treating osteoporosis; many studies have found that multimodal exercise programs produce significant improvement in factors related to osteoporosis.
Factors include lower limb strength, balance, flexibility, and risk of falls. Other modes of exercise have also been shown to benefit individuals with osteoporosis; these include weight-bearing exercise, resistance training (specifically progressive resistance training), and aerobic exercise. The recommendations for these types of exercises are as follows: weight-bearing exercise should be done 4–7 days a week at moderate to high intensity; activities should be multidirectional; and the load should be greater than the typical everyday load on bones. Some examples of such exercises are jumping, skipping, hopping, and depth jumps. The recommended dosage for progressive resistance training is 2 or more days a week, with intensity (load) starting low and increasing gradually. Resistance training should focus on major muscle groups used for functional movements as well as muscles that place direct stress on bones susceptible to fracture. Considerations for resistance training include teaching proper lifting technique and being careful with lifting weights above the head. Lastly, aerobic exercise has minimal effect on preventing BMD loss unless done at a higher intensity or with a load such as a weighted vest. A consideration with this mode is that it may carry a higher risk of fall or fracture. Improvements can also be observed in other ways, such as decreased Timed-Up-and-Go (TUG) times, improved Sit-To-Stand performance, and improved One-Leg-Stance test performance. A study with a 12-week exercise intervention in postmenopausal osteoporotic women observed a 2.27-second decrease in TUG times in the experimental group. Overall, when prescribing exercise for individuals with osteoporosis, the key is to evaluate the individual's needs and then individualize the program with multiple exercise modalities that work for them, emphasizing increasing muscle strength as well as maintaining bone mass (these dosage parameters are gathered into a small data structure in the code sketch below). Physical therapy People with osteoporosis are at higher risk of falls due to poor postural control, muscle weakness, and overall deconditioning. Postural control is important to maintaining functional movements such as walking and standing. Physical therapy may be an effective way to address postural weakness that may result from vertebral fractures, which are common in people with osteoporosis. Physical therapy treatment plans for people with vertebral fractures include balance training, postural correction, trunk and lower extremity muscle strengthening exercises, and moderate-intensity aerobic physical activity. The goal of these interventions is to regain normal spine curvatures, increase spine stability, and improve functional performance. Physical therapy interventions are also designed to slow the rate of bone loss through home exercise programs. Whole body vibration therapy has also been suggested as a physical therapy intervention. Moderate to low-quality evidence indicates that whole body vibration therapy may reduce the risk of falls. There are conflicting reviews as to whether vibration therapy improves bone mineral density. Physical therapy can aid in the overall prevention of osteoporosis through therapeutic exercise. Prescribed amounts of mechanical loading or increased forces on the bones promote bone formation and vascularization in various ways, therefore offering a preventative measure that is not reliant on drugs. Specific exercise interacts with the body's hormones and signaling pathways, which encourages the maintenance of a healthy skeleton.
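As a rough summary of the exercise dosage parameters described above under physical exercise prescription, the sketch below collects them into a small Python data structure that an exercise-planning tool could consume. The class, field names, and the structure itself are assumptions made purely for illustration; they are not an established clinical schema, and the values simply restate the recommendations given above.

    from dataclasses import dataclass

    @dataclass
    class ExercisePrescription:
        mode: str
        days_per_week: str
        intensity: str
        notes: str

    # Values restate the recommendations summarized above; the schema is illustrative only.
    OSTEOPOROSIS_EXERCISE_GUIDELINES = [
        ExercisePrescription(
            mode="weight-bearing / impact",
            days_per_week="4-7",
            intensity="moderate to high",
            notes="multidirectional movements with load above typical everyday "
                  "loading, e.g. jumping, skipping, hopping, depth jumps",
        ),
        ExercisePrescription(
            mode="progressive resistance training",
            days_per_week="2 or more",
            intensity="start low, increase gradually",
            notes="target major muscle groups and muscles stressing fracture-prone "
                  "bones; teach proper lifting technique; caution with overhead lifts",
        ),
        ExercisePrescription(
            mode="aerobic exercise",
            days_per_week="not specified above",
            intensity="higher intensity, or with added load such as a weighted vest",
            notes="minimal effect on BMD otherwise; monitor fall and fracture risk",
        ),
    ]

    if __name__ == "__main__":
        for p in OSTEOPOROSIS_EXERCISE_GUIDELINES:
            print(f"{p.mode}: {p.days_per_week} days/week, {p.intensity} -- {p.notes}")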
Hormone therapy Reduced estrogen levels increase the risk of osteoporosis, so hormone replacement therapy when women reach the menopause may reduce the incidence of osteoporosis. A more natural way of restoring hormone levels in postmenopausal women includes participating in specific forms of exercise. Weight-bearing exercises and resistance training exercises such as squats with weights, step-ups, lunges, stair climbing, and even jogging can elicit hormone responses that are advantageous for post-menopausal women living with osteoporosis. These exercises result in the release of growth hormone and insulin-like growth factor-1 (IGF-1), which participate in bone remodeling. Stress is applied to the bones, thus activating osteoblasts, the cells that form new bone and grow and heal existing bone, while restoring hormones that increase bone density. Resistance training exercises, like weight lifting, can lead to brief increases in anabolic hormones, like testosterone, which aid in muscle and bone strength. The increase in mechanical tension during resistance exercise will likely help stimulate the production of insulin-like growth factors in the bone, but to a greater extent. Post-menopausal women experience a reduction of estrogen, which is essential for bone density, so these exercise-induced hormonal enhancements can counteract the loss of bone mineral density in the most critical areas, such as the lumbar spine and the femoral neck. Research suggests that regular resistance training accompanied by weight-bearing activities helps reduce the progression of osteoporosis and the risk of fracture. Management Lifestyle Changes Weight-bearing endurance exercise and/or exercises to strengthen muscles improve bone strength in those with osteoporosis. Aerobics, weight bearing, and resistance exercises all maintain or increase BMD in postmenopausal women. Daily intake of calcium and vitamin D is recommended for postmenopausal women. Fall prevention can help prevent osteoporosis complications. There is some evidence for hip protectors specifically among those who are in care homes. Pharmacologic therapy The US National Osteoporosis Foundation recommends pharmacologic treatment for patients with hip or spine fracture thought to be related to osteoporosis, those with BMD 2.5 SD or more below the young normal mean (T-score −2.5 or below), and those with BMD between 1 and 2.5 SD below the normal mean whose 10-year risk of hip fracture, using FRAX, is equal to or greater than 3%. Bisphosphonates are useful in decreasing the risk of future fractures in those who have already sustained a fracture due to osteoporosis. This benefit is present when they are taken for three to four years. They do not appear to change the overall risk of death. Tentative evidence does not support the use of bisphosphonates as a standard treatment for secondary osteoporosis in children. Different bisphosphonates have not been directly compared; therefore, it is unknown if one is better than another. Fracture risk reduction is between 25 and 70% depending on the bone involved. There are concerns of atypical femoral fractures and osteonecrosis of the jaw with long-term use, but these risks are low. With evidence of little benefit when used for more than three to five years, and in light of the potential adverse events, it may be appropriate to stop treatment after this time. One medical organization recommends that after five years of medications by mouth or three years of intravenous medication among those at low risk, bisphosphonate treatment can be stopped.
In those at higher risk, they recommend up to ten years of medication by mouth or six years of intravenous treatment. The goal of osteoporosis management is to prevent osteoporotic fractures, but for those who have already sustained one, it is more urgent to prevent a secondary fracture. That is because patients with a fracture are more likely to experience a recurrent fracture, with a marked increase in morbidity and mortality compared with patients who have not had a fracture. Among the five bisphosphonates, no significant differences were found in secondary fracture prevention when all fracture endpoints were combined. That being said, alendronate was identified as the most efficacious for secondary prevention of vertebral and hip fractures, while zoledronate showed better performance for nonvertebral non-hip fracture prevention. There is concern that many people do not receive appropriate pharmacological therapy after a low-impact fracture. For those with osteoporosis but who have not had a fracture, evidence does not support a reduction in fracture risk with risedronate or etidronate. Alendronate decreases fractures of the spine but does not have any effect on other types of fractures. Half of patients stop their medications within a year. When on treatment with bisphosphonates, rechecking bone mineral density is not needed. There is tentative evidence of benefit in males with osteoporosis. Fluoride supplementation does not appear to be effective in postmenopausal osteoporosis, as even though it increases bone density, it does not decrease the risk of fractures. Teriparatide (a recombinant parathyroid hormone) has been shown to be effective in the treatment of women with postmenopausal osteoporosis. Some evidence also indicates strontium ranelate is effective in decreasing the risk of vertebral and nonvertebral fractures in postmenopausal women with osteoporosis. Hormone replacement therapy, while effective for osteoporosis, is only recommended in women who also have menopausal symptoms. It is not recommended for osteoporosis by itself. Raloxifene, while effective in decreasing vertebral fractures, does not affect the risk of nonvertebral fracture; and while it reduces the risk of breast cancer, it increases the risk of blood clots and strokes. While denosumab is effective at preventing fractures in women, there is not clear evidence of benefit in males. In hypogonadal men, testosterone has been shown to improve bone quantity and quality, but, as of 2008, no studies had evaluated its effect on fracture risk or in men with normal testosterone levels. Calcitonin, while once recommended, is no longer recommended due to the associated risk of cancer and its questionable effect on fracture risk. Alendronic acid/colecalciferol can be taken to treat this condition in post-menopausal women. Romosozumab (sold under the brand name Evenity) is a monoclonal antibody against sclerostin. Romosozumab is usually reserved for patients with very high fracture risk and is the only available drug therapy for osteoporosis that leads to simultaneous inhibition of bone resorption together with an anabolic effect. Certain medications like alendronate, etidronate, risedronate, raloxifene, and strontium ranelate can help to prevent osteoporotic fragility fractures in postmenopausal women with osteoporosis. Tentative evidence suggests that Chinese herbal medicines may have potential benefits on bone mineral density. Prognosis Although people with osteoporosis have increased mortality due to the complications of fracture, the fracture itself is rarely lethal.
Hip fractures can lead to decreased mobility and additional risks of numerous complications (such as deep venous thrombosis and/or pulmonary embolism, and pneumonia). The six-month mortality rate for those aged 50 and above following hip fracture was found to be around 13.5%, with a substantial proportion (almost 13%) needing total assistance to mobilize after a hip fracture. Vertebral fractures, while having a smaller impact on mortality, can lead to severe chronic pain of neurogenic origin, which can be hard to control, as well as deformity. Though rare, multiple vertebral fractures can lead to such severe hunchback (kyphosis) that the resulting pressure on internal organs can impair one's ability to breathe. Apart from risk of death and other complications, osteoporotic fractures are associated with a reduced health-related quality of life. The condition is responsible for millions of fractures annually, mostly involving the lumbar vertebrae, hip, and wrist. Fragility fractures of ribs are also common in men. Fractures Hip fractures are responsible for the most serious consequences of osteoporosis. In the United States, more than 250,000 hip fractures annually are attributable to osteoporosis. A 50-year-old white woman is estimated to have a 17.5% lifetime risk of fracture of the proximal femur. The incidence of hip fractures increases each decade from the sixth through the ninth for both women and men for all populations. The highest incidence is found among men and women ages 80 or older. Between 35 and 50% of all women over 50 have had at least one vertebral fracture. In the United States, 700,000 vertebral fractures occur annually, but only about a third are recognized. In a series of 9,704 women with an average age of 68.8 years who were studied for 15 years, 324 had already sustained a vertebral fracture at entry into the study; 18.2% developed a vertebral fracture during follow-up, but that risk rose to 41.4% in women who had a previous vertebral fracture. In the United States, 250,000 wrist fractures annually are attributable to osteoporosis. Wrist fractures are the third most common type of osteoporotic fractures. The lifetime risk of sustaining a Colles' fracture is about 16% for white women. By the time women reach age 70, about 20% have had at least one wrist fracture. Fragility fractures of the ribs are common in men as young as age 35. These are often overlooked as signs of osteoporosis, as these men are often physically active and develop the fracture in the course of physical activity, such as falling while water skiing or jet skiing. Epidemiology Osteoporosis becomes more common with age, especially after 50 years (its prevalence rises from about 2% at 50 years to almost 50% by the age of 80). It affects women more than men due to the sharp fall in estrogen production that follows menopause. Globally, it is estimated that 21.2% of women and 6.3% of men over the age of 50 have osteoporosis, corresponding to a total of around 500 million people worldwide. About 15% of Caucasians in their 50s and 70% of those over 80 are affected. In the developed world, depending on the method of diagnosis, 2% to 8% of males and 9% to 38% of females are affected. Rates of disease in the developing world are unclear. From the age of 50 onwards, fractures (including hip fractures) are roughly twice as common in women as in men. A 60-year-old woman has a 44% chance of experiencing a fracture in her lifetime, whereas the lifetime risk for a 60-year-old man is only 25%.
Such differences can be attributed to the increased risk of osteoporosis due to decreased estrogen levels following menopause. In 2019, up to 37 million fragility fractures linked to osteoporosis were thought to occur in people over the age of 55 worldwide. Globally, 1 in 3 women and 1 in 5 men over the age of 50 will have an osteoporotic fracture. Data from the United States shows a decrease in osteoporosis within the general population and in white women, from 18% in 1994 to 10% in 2006. White and Asian people are at greater risk. People of African descent are at a decreased risk of fractures due to osteoporosis, although they have the highest risk of death following an osteoporotic fracture. It has been shown that latitude affects the risk of osteoporotic fracture. Areas of higher latitude, such as Northern Europe, receive less vitamin D through sunlight compared to regions closer to the equator, and consequently have higher fracture rates in comparison to lower latitudes. For example, Swedish men and women have a 13% and 28.5% risk of hip fracture by age 50, respectively, whereas this risk is only 1.9% and 2.4% in Chinese men and women. Diet may also be a factor that is responsible for this difference, as vitamin D, calcium, magnesium, and folate are all linked to bone mineral density. There is also an association between celiac disease and increased risk of osteoporosis. In studies with premenopausal females and males, there was a correlation between celiac disease and osteoporosis and osteopenia. Celiac disease can decrease absorption of nutrients in the small intestine, such as calcium, and a gluten-free diet can help people with celiac disease to revert to normal absorption in the gut. About 22 million women and 5.5 million men in the European Union had osteoporosis in 2010. In the United States in 2010, about 8 million women and one to two million men had osteoporosis. This places a large economic burden on the healthcare system due to costs of treatment, long-term disability, and loss of productivity in the working population. The EU spends 37 billion euros per year in healthcare costs related to osteoporosis, and the US spends an estimated US$19 billion annually for related healthcare costs. History Research on age-related reductions in bone density goes back to the early 1800s. French pathologist Jean Lobstein coined the term osteoporosis. The American endocrinologist Fuller Albright linked osteoporosis with the postmenopausal state. Anthropologists have studied skeletal remains that showed loss of bone density and associated structural changes that were linked to a chronic malnutrition in the agricultural area in which these individuals lived. "It follows that the skeletal deformation may be attributed to their heavy labor in agriculture as well as to their chronic malnutrition", causing the osteoporosis seen when radiographs of the remains were made.
Biology and health sciences
Specific diseases
Health
22469
https://en.wikipedia.org/wiki/Ontogeny
Ontogeny
Ontogeny (also ontogenesis) is the origination and development of an organism (both physical and psychological, e.g., moral development), usually from the time of fertilization of the egg to adult. The term can also be used to refer to the study of the entirety of an organism's lifespan. Ontogeny is the developmental history of an organism within its own lifetime, as distinct from phylogeny, which refers to the evolutionary history of a species. Another way to think of ontogeny is that it is the process of an organism going through all of the developmental stages over its lifetime. The developmental history includes all the developmental events that occur during the existence of an organism, beginning with the changes in the egg at the time of fertilization and events from the time of birth or hatching and afterward (i.e., growth, remolding of body shape, development of secondary sexual characteristics, etc.). While developmental (i.e., ontogenetic) processes can influence subsequent evolutionary (e.g., phylogenetic) processes (see evolutionary developmental biology and recapitulation theory), individual organisms develop (ontogeny), while species evolve (phylogeny). Ontogeny, embryology and developmental biology are closely related studies and those terms are sometimes used interchangeably. Aspects of ontogeny are morphogenesis, the development of form and shape of an organism; tissue growth; and cellular differentiation. The term ontogeny has also been used in cell biology to describe the development of various cell types within an organism. Ontogeny is a useful field of study in many disciplines, including developmental biology, cell biology, genetics, developmental psychology, developmental cognitive neuroscience, and developmental psychobiology. Ontogeny is used in anthropology as "the process through which each of us embodies the history of our own making". Etymology The word ontogeny comes from the Greek on, meaning 'a being, an individual, existence', and the suffix -geny, from the Greek -geneia, meaning 'genesis, origin, mode of production'. History The term ontogeny was coined by Ernst Haeckel, a German zoologist and evolutionist, in the 1860s. Haeckel, born in Germany on February 16, 1834, was also a strong supporter of Darwinism. In his 1866 book, Generelle Morphologie der Organismen ("General Morphology of Organisms"), Haeckel suggested that ontogeny briefly and sometimes incompletely recapitulates or repeats phylogeny. Even though his book was widely read, the scientific community was not very convinced or interested in his ideas, so he turned to producing more publications to get more attention. In 1866, Haeckel and others imagined development as producing new structures after earlier additions to the developing organism had been established. He proposed that individual development followed the developmental stages of previous generations and that future generations would add something new to this process, and that there was a causal parallelism between an animal's ontogeny and phylogeny. In addition, Haeckel suggested a biogenetic law that ontogeny recapitulates phylogeny, based on the idea that the successive and progressive origin of new species was based on the same laws as the successive and progressive origin of new embryonic structures. According to Haeckel, development produced novelties, and natural selection would eliminate species that had become outdated or obsolete.
Though his view of development and evolution was not justifiable, future embryologists tweaked and built on Haeckel's proposals and showed how new morphological structures can occur by the hereditary modification of embryonic development. Marine biologist Walter Garstang reversed Haeckel's relationship between ontogeny and phylogeny, stating that ontogeny creates phylogeny, not recapitulates it. A seminal 1963 paper by Nikolaas Tinbergen named ontogeny as one of the four primary questions of biology, along with Julian Huxley's three others: causation, survival value and evolution. Tinbergen emphasized that the change of behavioral machinery during development was distinct from the change in behavior during development: We can conclude that the thrush itself, i.e. its behavioral machinery, has changed only if the behavior change occurred while the environment was held constant...When we turn from description to causal analysis, and ask in what way the observed change in behavior machinery has been brought about, the natural first step is to try and distinguish between environmental influences and those within the animal...In ontogeny the conclusion that a certain change is internally controlled (is 'innate') is reached by elimination. Tinbergen was concerned that the elimination of environmental factors is difficult to establish, and the use of the word innate is often misleading. Developmental stages Development of an organism happens through fertilization, cleavage, blastulation, gastrulation, organogenesis, and metamorphosis into an adult. Each species of animal has a slightly different journey through these stages, since some stages might be shorter or longer when compared to other species, and where the offspring develops differs for each animal type (e.g., in a hard egg shell, uterus, soft egg shell, on a plant leaf, etc.). Fertilization In humans, prenatal development starts when a sperm fertilizes an egg and the two fuse together, initiating embryonic development. The fusion of egg and sperm into a zygote changes the surrounding membrane so that no additional sperm can penetrate the egg, preventing multiple fertilizations. Fusion also activates the egg so it can begin undergoing cell division. Not every animal species has specifically a sperm and an egg, but each has two gametes that contain half of the species' typical genetic material, and the membranes of these gametes fuse to start creating an offspring. Cleavage Not long after successful fertilization by sperm, the zygote undergoes many mitotic (non-sexual) cell divisions. Cleavage is this process of cell division: the starting zygote becomes a collection of nearly identical cells, the morula, made up of cells called blastomeres. Cleavage prepares the zygote to become an embryo, a stage that in humans spans from 2 weeks to 8 weeks after conception (fertilization). Blastulation After the zygote has become an embryo, it continues dividing into a hollow sphere of cells, the blastula. These outer cells form a single epithelial layer, the blastoderm, that essentially encases the fluid-filled interior, the blastocoel. This basic process is modified in different species. Blastulation differs slightly in different species, but in mammals, the eight-cell stage embryo forms into a slightly different type of blastula, called a blastocyst.
Other species such as sea stars, frogs, chicks, and mice have all the same structures at this stage, yet the orientation of these features differs, and these species have additional cell types at this stage. Gastrulation After blastulation, the single-layered blastula expands and reorganizes into multiple layers, forming a gastrula. Reptiles, birds and mammals are triploblastic organisms, meaning the gastrula comprises three germ layers: the endoderm (inner layer), mesoderm (middle layer), and ectoderm (outer layer). In humans, each germ layer gives rise to multipotent stem cells that can become specific tissues depending on the layer. This differentiation of the germ layers varies somewhat between organisms, because not all of these organs and tissues are present in all organisms, but corresponding body systems take their place. Organogenesis In humans, the germ layers differentiate into the specific organs and tissues they become later in life. Cells of the germ layers are able to migrate to their final locations and rearrange themselves, and some organs are made of two germ layers: one for the outside, the other for the inside. The endoderm cells become the internal linings of organisms, such as the stomach, colon, small intestine, liver, and pancreas of the digestive system and the lungs. The mesoderm gives rise to other tissues not formed by the ectoderm, such as the heart, muscles, bones, blood, dermis of the skin, bone marrow, and the urogenital system. This germ layer is the distinguishing layer of the three, separating evolutionarily higher life-forms (e.g., bilateral organisms like humans) from lower life-forms (those with radial symmetry). Lastly, the ectoderm is the outer layer of cells that become the epidermis and hair while being the precursor to the mammary glands, central nervous system, and the peripheral nervous system. The development of pig, cow, rabbit, and human offspring is similar in this respect: in each, the germ layers become different organs and tissues, and these species develop in an essentially parallel manner before branching off to develop features specific to the organism, such as hooves, a tail, or ears. Neurulation In developing vertebrate offspring, a neural tube is formed through either primary or secondary neurulation. Some species develop their spine and nervous system using both primary and secondary neurulation, while others use only primary or secondary neurulation. In human fetal development, primary neurulation occurs during weeks 3 and 4 of gestation to develop the brain and spinal cord. Then, during weeks 5 and 6 of gestation, secondary neurulation forms the lower sacral and coccygeal cord. Primary Neurulation Primary neurulation is the process in which cells surrounding the neural plate interact with neural plate cells to proliferate, converge, and pinch off, forming a hollow tube above the notochord and mesoderm. This process is discontinuous and can start at different points along the cranial-caudal axis that are necessary for it to close. After the neural tube closes, the neural crest cells and ectoderm cells separate, and the ectoderm becomes the epidermis surrounding this complex.
The neural crest cells differentiate to become components of most of the peripheral nervous system in animals. Next, the notochord degenerates to become only the nucleus pulposus of the intervertebral discs, and the mesoderm cells differentiate to become the somites and, later on, skeletal muscle. Also during this stage, the neural crest cells become the spinal ganglia, which function as the brain in organisms like earthworms and arthropods. In more advanced organisms like amphibians, birds and mammals, the spinal ganglia consist of clusters of nerve cell bodies positioned along the spinal cord at the dorsal and ventral roots of a spinal nerve, which is a pair of nerves that correspond to a vertebra of the spine. Secondary Neurulation In secondary neurulation, the caudal and sacral regions of the spine are formed after primary neurulation is finished. This process initiates once primary neurulation is finished and the posterior neuropore closes, so the tail bud can proliferate and condense, then create a cavity and fuse with the central canal of the neural tube. Secondary neurulation occurs in the small region extending from the spinal tail bud up to the posterior neuropore, which comprises the open neural folds near the tail region that do not close through primary neurulation. As canalization progresses over the next few weeks, neurons and ependymal cells (cells that create cerebrospinal fluid) differentiate to become the tail end of the spinal cord. Next, the closed neural tube contains neuroepithelial cells that immediately divide after closure, and a second type of cell forms: the neuroblast. Neuroblast cells form the mantle layer, which later becomes the gray matter, which then gives rise to a marginal layer that becomes the white matter of the spinal cord. Secondary neurulation is seen in the neural tube of the lumbar and tail vertebrae of frogs and chicks, and in both instances this process is like a continuation of gastrulation. Larval and juvenile phases In most species, the young organism that has just been born or hatched is not yet sexually mature, and in most animals this young organism looks quite different from the adult form. This young organism is the larva and is the intermediate form before metamorphosing into an adult. A well known example of a larval form of an animal is the caterpillar of butterflies and moths. Caterpillars keep growing and feeding in order to store enough energy for the pupal stage, during which the body parts necessary for metamorphosis are grown. The juvenile phase differs between plants and animals; in plants, juvenility is an early phase of growth during which plants cannot flower. In animals, the juvenile stage is most commonly found in social mammals, such as wild dogs, monkeys, apes, lions, wolves, and more. In humans, puberty marks the end of this stage and adolescence follows. Some species begin puberty and reproduction before the juvenile stage is over, such as in female non-human primates. Metamorphosis The process of an organism's body undergoing structural and physical changes after birth or hatching to become suitable for its adult environment is metamorphosis. For example, amphibian tadpoles undergo maturation of liver enzymes, hemoglobin, and eye pigments, in addition to remodeling of their nervous, digestive, and reproductive systems. In all species, molting and juvenile hormones appear to regulate these changes.
In butterflies, metamorphosis through these life stages transforms the caterpillar into a butterfly. Adulthood Adulthood is the stage at which physical and intellectual maturity have been achieved; when this happens differs between species. In humans, adulthood is thought to begin around 20 or 21 years of age and is the longest stage of life, but in all species it ends with death. In dogs, small breeds (e.g., Yorkshire Terrier, Chihuahua, Cocker Spaniel, etc.) physically mature faster than large breeds (e.g., Saint Bernard, Great Dane, Golden Retriever, etc.), so adulthood is reached anywhere from 12 to 24 months of age. In contrast, many insect species have long larval stages, and the adult stage is only for reproduction. Adult silkworm moths lack functional mouthparts and do not feed, so they must consume enough food during the larval stage to have the energy to survive and mate. Senescence Senescence is when cells stop dividing but do not die; these cells can build up and cause problems in the body. Senescent cells can release substances that cause inflammation and can damage healthy nearby cells. Senescence can be induced by unrepaired DNA damage (e.g., from radiation or old age) or other cellular stress; the term also refers to the state of being old. Ontogenetic allometry Most organisms undergo allometric changes in shape as they grow and mature, while others engage in metamorphosis. Even reptiles (non-avian sauropsids, e.g., crocodilians, turtles, snakes, and lizards), in which the offspring are often viewed as miniature adults, show a variety of ontogenetic changes in morphology and physiology.
Biology and health sciences
Animal ontogeny
null
22472
https://en.wikipedia.org/wiki/Ophiuchus
Ophiuchus
Ophiuchus is a large constellation straddling the celestial equator. Its name comes from the Ancient Greek for "serpent-bearer", and it is commonly represented as a man grasping a snake. The serpent is represented by the constellation Serpens. Ophiuchus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. An old alternative name for the constellation was Serpentarius. Location Ophiuchus lies between Aquila, Serpens, Scorpius, Sagittarius, and Hercules, northwest of the center of the Milky Way. The southern part lies between Scorpius to the west and Sagittarius to the east. In the northern hemisphere, it is best visible in summer. It lies opposite Orion in the sky. Ophiuchus is depicted as a man grasping a serpent; the interposition of his body divides the snake constellation Serpens into two parts, Serpens Caput and Serpens Cauda. Ophiuchus straddles the equator with the majority of its area lying in the southern hemisphere. Rasalhague, its brightest star, lies near the northern edge of Ophiuchus. The constellation extends southward to −30° declination. Segments of the ecliptic within Ophiuchus are south of −20° declination. In contrast to Orion, from November to January (summer in the Southern Hemisphere, winter in the Northern Hemisphere), Ophiuchus is in the daytime sky and thus not visible at most latitudes. However, for much of the polar region north of the Arctic Circle in the Northern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus parts of Ophiuchus, especially Rasalhague) are then visible at twilight for a few hours around local noon, low in the south. In the Northern Hemisphere's spring and summer months, when Ophiuchus is normally visible in the night sky, the constellation is not actually visible from the Arctic, because the midnight sun obscures the stars at those times and places. In countries close to the equator, Ophiuchus appears overhead in June around midnight and in the October evening sky. Features Stars The brightest stars in Ophiuchus include α Ophiuchi, called Rasalhague ("head of the serpent charmer"), at magnitude 2.07, and η Ophiuchi, known as Sabik ("the preceding one"), at magnitude 2.43. Alpha Ophiuchi is composed of an A-type (bluish-white) giant star and a K-type main sequence star. The primary is a rapid rotator with an inclined axis of rotation. Eta Ophiuchi is a binary system. Other bright stars in the constellation include β Ophiuchi, Cebalrai ("dog of the shepherd") and λ Ophiuchi, or Marfik ("the elbow"). Beta Ophiuchi is an evolved red giant star that is slightly more massive than the Sun. Lambda Ophiuchi is a binary star system with the primary being more massive and luminous than the Sun. RS Ophiuchi is part of a class called recurrent novae, whose brightness increases at irregular intervals by hundreds of times in a period of just a few days. It is thought to be on the brink of becoming a type Ia supernova. It erupts around every 15 years, most recently in 2021, and usually reaches a magnitude of around 5.0 during eruptions. Barnard's Star, one of the nearest stars to the Solar System (the only stars closer are the Alpha Centauri binary star system and Proxima Centauri), lies in Ophiuchus. It is located to the left of β and just north of the V-shaped group of stars in an area that was once occupied by the now-obsolete constellation of Taurus Poniatovii (Poniatowski's Bull).
An exoplanet was once thought to orbit Barnard's Star, but later studies refuted this claim. In 1998, an intense flare of the star was observed. The star has also been a target of plans for interstellar travel such as Project Daedalus. In 2005, astronomers using data from the Green Bank Telescope discovered a superbubble so large that it extends beyond the plane of the galaxy. It is called the Ophiuchus Superbubble. In April 2007, astronomers announced that the Swedish-built Odin satellite had made the first detection of clouds of molecular oxygen in space, following observations in the constellation Ophiuchus. The supernova of 1604 was first observed on 9 October 1604, near θ Ophiuchi. Johannes Kepler saw it first on 16 October and studied it so extensively that the supernova was subsequently called Kepler's Supernova. He published his findings in a book titled De stella nova in pede Serpentarii (On the New Star in Ophiuchus's Foot). Galileo used its brief appearance to counter the Aristotelian dogma that the heavens are changeless. It was a Type Ia supernova and the most recent Milky Way supernova visible to the unaided eye. In 2009 it was announced that GJ 1214, a star in Ophiuchus, undergoes repeated, cyclical dimming with a period of about 1.5 days consistent with the transit of a small orbiting planet. The planet's low density (about 40% that of Earth) suggests that the planet might have a substantial component of low-density gas—possibly hydrogen or steam. The proximity of this star to Earth (42 light years) makes it a feasible target for further observations. The host star emits X-rays which could have removed mass from the exoplanet. In April 2010, the naked-eye star ζ Ophiuchi was occulted by the asteroid 824 Anastasia. Deep-sky objects Ophiuchus contains several star clusters, such as IC 4665, NGC 6633, M9, M10, M12, M14, M19, M62, and M107, as well as the nebula IC 4603-4604. M9 is a globular cluster which may have an extra-galactic origin. M10 is a fairly close globular cluster, only 20,000 light-years from Earth. It has a magnitude of 6.6 and is a Shapley class VII cluster. This means that it has "intermediate" concentration; it is only somewhat concentrated towards its center. M12 is a globular cluster which is around 5 kiloparsecs from the Solar System. M14 is another globular cluster which is somewhat farther away. The globular cluster M19 is oblate in shape and contains multiple types of variable stars. M62 is a globular cluster rich in variable stars such as RR Lyrae variables and has two generations of stars with different element abundances. M107 is also rich in variable stars. The unusual galaxy merger remnant and starburst galaxy NGC 6240 is also in Ophiuchus. At a distance of 400 million light-years, this "butterfly-shaped" galaxy has two supermassive black holes 3,000 light-years apart. Confirmation of the fact that both nuclei contain black holes was obtained by spectra from the Chandra X-ray Observatory. Astronomers estimate that the black holes will merge in another billion years. NGC 6240 also has an unusually high rate of star formation, classifying it as a starburst galaxy. This is likely due to the heat generated by the orbiting black holes and the aftermath of the collision. Both have active galactic nuclei. In 2006, a new nearby star cluster was discovered associated with the 4th magnitude star Mu Ophiuchi. The Mamajek 2 cluster appears to be a poor cluster remnant analogous to the Ursa Major Moving Group, but 7 times more distant (approximately 170 parsecs away).
Mamajek 2 appears to have formed in the same star-forming complex as the NGC 2516 cluster roughly 135 million years ago. Barnard 68 is a large dark nebula, located 410 light-years from Earth. Despite its diameter of 0.4 light-years, Barnard 68 only has twice the mass of the Sun, making it both very diffuse and very cold, with a temperature of about 16 kelvins. Though it is currently stable, Barnard 68 will eventually collapse, inciting the process of star formation. One unusual feature of Barnard 68 is its vibrations, which have a period of 250,000 years. Astronomers speculate that this phenomenon is caused by the shock wave from a supernova. Barnard 68 blocks the light of thousands of background stars that remain visible at other wavelengths, and the distribution of dust within it has been mapped. The space probe Voyager 1, the furthest human-made object from Earth, is traveling in the direction of Ophiuchus. It is located between α Herculis, α Ophiuchi and κ Ophiuchi at right ascension 17h 13m and declination +12° 25′ (July 2020). In November 2022, the USA's NSF NOIRLab (National Optical-Infrared Astronomy Research Laboratory) announced the unambiguous identification of the nearest stellar black hole orbited by a G-type main-sequence star, the system identified as Gaia BH1 at around 1,560 light years from the Sun. History and mythology There is no evidence of the constellation preceding the classical era, and in Babylonian astronomy, a "Sitting Gods" constellation seems to have been located in the general area of Ophiuchus. However, Gavin White proposes that Ophiuchus may in fact be remotely descended from this Babylonian constellation, representing Nirah, a serpent-god who was sometimes depicted with his upper half human but with serpents for legs. The earliest mention of the constellation is in Aratus, informed by the lost catalogue of Eudoxus of Cnidus (4th century BC). To the ancient Greeks, the constellation represented the god Apollo struggling with a huge snake that guarded the Oracle of Delphi. Later myths identified Ophiuchus with Laocoön, the Trojan priest of Poseidon, who warned his fellow Trojans about the Trojan Horse and was later slain by a pair of sea serpents sent by the gods to punish him. According to Roman era mythography, the figure represents the healer Asclepius, who learned the secrets of keeping death at bay after observing one serpent bringing another healing herbs. To prevent the entire human race from becoming immortal under Asclepius' care, Jupiter killed him with a bolt of lightning, but later placed his image in the heavens to honor his good works. In medieval Islamic astronomy (Azophi's Uranometry, 10th century), the constellation was known as Al-Ḥawwa, "the snake-charmer". Aratus describes Ophiuchus as trampling on Scorpius with his feet. This is depicted in Renaissance to Early Modern star charts, beginning with Albrecht Dürer in 1515; in some depictions (such as that of Johannes Kepler in De Stella Nova, 1606), Scorpius also seems to threaten to sting Serpentarius in the foot. This is consistent with Azophi, who already included ψ Oph and ω Oph as the snake-charmer's "left foot", and θ Oph and ο Oph as his "right foot", making Ophiuchus a zodiacal constellation at least as regards his feet. This arrangement has been taken as symbolic in later literature and placed in relation to the words spoken by God to the serpent in the Garden of Eden (Genesis 3:15). Zodiac Ophiuchus is one of the 13 constellations that cross the ecliptic.
It has sometimes been suggested as the "13th sign of the zodiac". However, this confuses astrological signs with astronomical constellations. The signs of the zodiac are a 12-fold division of the ecliptic, so that each sign spans 30° of celestial longitude, approximately the distance the Sun travels in a month, and (in the Western tradition) are aligned with the seasons so that the March equinox always falls on the boundary between Pisces and Aries. Constellations, on the other hand, are unequal in size and are based on the positions of the stars. The constellations of the zodiac have only a loose association with the signs of the zodiac, and do not in general coincide with them. In Western astrology the constellation of Aquarius, for example, largely corresponds to the sign of Pisces. Similarly, the constellation of Ophiuchus occupies most (29 November – 18 December) of the sign of Sagittarius (23 November – 21 December). The differences are due to the fact that the time of year at which the Sun passes through a particular zodiac constellation's position has slowly changed (because of the precession of the Earth's rotational axis) over the centuries from when the Babylonians originally developed the zodiac.
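The 12-fold division described above is simple arithmetic: each sign occupies a fixed 30° band of ecliptic longitude measured from the March equinox (0° marking the start of Aries in the Western tradition). A minimal Python sketch of that mapping, for illustration only:

    SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
             "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

    def zodiac_sign(ecliptic_longitude_deg):
        """Return the Western zodiac sign for a given ecliptic longitude.

        Longitude is measured in degrees eastward from the March equinox point
        (the first point of Aries); each sign spans exactly 30 degrees.
        """
        return SIGNS[int(ecliptic_longitude_deg % 360 // 30)]

    if __name__ == "__main__":
        # The Sun advances roughly 360/365.25, or about 0.99 degrees of
        # longitude per day, i.e. about one sign (30 degrees) per month.
        for lon in (0, 45, 250, 359.9):
            print(f"{lon:6.1f} deg -> {zodiac_sign(lon)}")

The unequal constellation boundaries, by contrast, cannot be reduced to a formula like this, which is why the Sun spends different lengths of time in different constellations.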
Physical sciences
Other
Astronomy
22475
https://en.wikipedia.org/wiki/Octans
Octans
Octans is a faint constellation located in the deep Southern Sky. Its name is Latin for the eighth part of a circle, but it is named after the octant, a navigational instrument. Devised by French astronomer Nicolas Louis de Lacaille in 1752, Octans remains one of the 88 modern constellations. The southern celestial pole is located within the boundaries of Octans. History and mythology Octans was one of 14 constellations created by French astronomer Nicolas Louis de Lacaille during his expedition to the Cape of Good Hope, and was originally named l’Octans de Reflexion (“the reflecting octant”) in 1752, after he had observed and catalogued almost 10,000 southern stars during a two-year stay there. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. It was part of his catalogue of the southern sky, the Coelum Australe Stelliferum, which was published posthumously in 1763. In Europe, it became more widely known as Octans Hadleianus, in honor of English mathematician John Hadley, who invented the octant in 1730. There is no real mythology related to Octans, partially due to its faintness and relative recentness, but mostly because of its extreme southerly latitude. Notable features Stars Octans is a generally inconspicuous constellation with only one star brighter than magnitude 4; its brightest member is Nu Octantis, a spectral class K1 III giant star with an apparent magnitude of 3.73. It is 63.3 ± 0.8 light-years distant from Earth. Beta Octantis is the second brightest star in the constellation. Polaris Australis (Sigma Octantis), the southern pole star, is a magnitude 5.4 star just over 1 degree away from the true south celestial pole. Its relative faintness means that it is not practical for navigation. BQ Octantis is a fainter, magnitude 6.82 star located much closer to the south celestial pole (less than a degree away) than Sigma. In addition to having the current southern pole star of Earth, Octans also contains the southern pole star of the planet Saturn, which is the magnitude 4.3 Delta Octantis. The Astronomical Society of Southern Africa in 2003 reported that observations of the Mira variable stars R and T Octantis were urgently needed. Four star systems are known to have planets. Mu2 Octantis is a binary star system, the brighter component of which has a planet. Nu Octantis A also has a planet orbiting it. HD 142022 is a binary system, a component of which is a sunlike star with a massive planet with an orbital period of 1928 ± 46 days. HD 212301 is a yellow-white main sequence star with a hot Jupiter that completes an orbit every 2.2 days. Deep sky objects NGC 2573 (also known as Polarissima Australis) is a faint barred spiral galaxy that happens to be the closest NGC object to the south celestial pole. NGC 7095 and NGC 7098 are two barred spiral galaxies that are 115 million and 95 million light-years distant from Earth respectively. The sparse open cluster Collinder 411 is also located in the constellation. Namesakes A United States Navy stores ship used during World War II was named after the constellation.
Physical sciences
Other
Astronomy
22479
https://en.wikipedia.org/wiki/Olive
Olive
The olive, botanical name Olea europaea, meaning 'European olive', is a species of small tree or shrub in the family Oleaceae, found traditionally in the Mediterranean Basin, with wild subspecies found further afield in Africa and western Asia. When in shrub form, it is known as Olea europaea Montra, dwarf olive, or little olive. The species is cultivated in all the countries of the Mediterranean, as well as in Australia, New Zealand, North and South America and South Africa. It is the type species for its genus, Olea. The tree and its fruit give their name to the Oleaceae plant family, which also includes species such as lilac, jasmine, forsythia, and the true ash tree. The olive's fruit, also called an "olive", is of major agricultural importance in the Mediterranean region as the source of olive oil; it is one of the core ingredients in Middle Eastern and Mediterranean cuisines. Thousands of cultivars of the olive tree are known. Olive cultivars may be used primarily for oil, eating, or both. Olives cultivated for consumption are generally referred to as "table olives". About 80% of all harvested olives are turned into oil, while about 20% are used as table olives. Etymology The word olive derives from the Latin word for 'olive fruit; olive tree', possibly through Etruscan 𐌀𐌅𐌉𐌄𐌋𐌄 (eleiva) from the archaic Proto-Greek form *ἐλαίϝα (*elaíwa) (Classical Greek for 'olive fruit; olive tree'). The word oil originally meant 'olive oil', deriving from the Latin and Greek words for 'olive oil'. The word for 'oil' in multiple other languages also ultimately derives from the name of this tree and its fruit. The oldest attested forms of the Greek words are Mycenaean, written in the Linear B syllabic script. Description The olive tree, Olea europaea, is an evergreen tree or shrub native to Mediterranean Europe, Asia, and Africa. It is short and squat and rarely exceeds in height. 'Pisciottana', a unique variety comprising 40,000 trees found only in the area around Pisciotta in the Campania region of southern Italy, often exceeds this, with correspondingly large trunk diameters. The silvery green leaves are oblong, measuring long and wide. The trunk is typically gnarled and twisted. The small, white, feathery flowers, with ten-cleft calyx and corolla, two stamens, and bifid stigma, are borne generally on the previous year's wood, in racemes springing from the axils of the leaves. The fruit is a small drupe long when ripe, thinner-fleshed and smaller in wild plants than in orchard cultivars. Olives are harvested in the green to purple stage. O. europaea contains a pyrena commonly referred to in American English as a "pit", and in British English as a "stone". Taxonomy The six natural subspecies of Olea europaea are distributed over a wide range: O. e. subsp. europaea (Mediterranean Basin) The subspecies europaea is divided into two varieties, the europaea, which was formerly named Olea sativa, with the seedlings called "olivasters", and silvestris, which corresponds to the old wild-growing Mediterranean species O. oleaster, with the seedlings called "oleasters". The sylvestris is characterized by a smaller, shrubby tree that produces smaller fruits and leaves. O. e. subsp. cuspidata (from South Africa throughout East Africa, Arabia to Southwest China) O. e. subsp. cerasiformis (Madeira); also known as Olea maderensis O. e. subsp. guanchica (Canary Islands) O. e. subsp. laperrinei (Algeria, Sudan, Niger) O. e. subsp. maroccana (Morocco) The subspecies O. e. cerasiformis is tetraploid, and O. e. maroccana is hexaploid. 
Wild-growing forms of the olive are sometimes treated as the species Olea oleaster, or "oleaster." The trees referred to as "white" and "black" olives in Southeast Asia are not actually olives but species of Canarium. Cultivars Hundreds of cultivars of the olive tree are known. An olive's cultivar has a significant impact on its colour, size, shape, and growth characteristics, as well as the qualities of olive oil. Olive cultivars may be used primarily for oil, eating, or both. Olives cultivated for consumption are generally referred to as "table olives". Since many olive cultivars are self-sterile or nearly so, they are generally planted in pairs with a single primary cultivar and a secondary cultivar selected for its ability to fertilize the primary one. In recent times, efforts have been directed at producing hybrid cultivars with qualities useful to farmers, such as resistance to disease, quick growth, and larger or more consistent crops. History Mediterranean Basin Fossil evidence indicates that the olive tree had its origins 20–40 million years ago in the Oligocene, in what now corresponds to Italy and the eastern Mediterranean Basin. Around 100,000 years ago, olives were used by humans in Africa, on the Atlantic coast of Morocco, for fuel and most probably for consumption. Wild olive trees, or oleasters, have been collected in the Eastern Mediterranean since ~19,000 BP. The genome of cultivated olives reflects their origin from oleaster populations in the Eastern Mediterranean. The olive plant was first cultivated some 7,000 years ago in Mediterranean regions. For thousands of years olives were grown primarily for lamp oil, with little regard for culinary flavor. Its origin can be traced to the Levant based on written tablets, olive pits, and wood fragments found in ancient tombs. As far back as 3000 BC, olives were grown commercially in Crete and may have been the source of the wealth of the Minoan civilization. The ancestry of the cultivated olive is unknown. Fossil olea pollen has been found in Macedonia and other places around the Mediterranean, indicating that this genus is an original element of the Mediterranean flora. Fossilized leaves of olea were found in the palaeosols of the volcanic Greek island of Santorini and dated to about 37,000 BP. Imprints of larvae of olive whitefly Aleurobus olivinus were found on the leaves. The same insect is commonly found today on olive leaves, showing that the plant-animal co-evolutionary relations have not changed since that time. Other leaves found on the same island are dated back to 60,000 BP, making them the oldest known olives from the Mediterranean. Outside the Mediterranean Olives are not native to the Americas. Spanish colonists brought the olive to the New World, where its cultivation prospered in present-day Peru, Chile, Uruguay and Argentina. The first seedlings from Spain were planted in Lima by Antonio de Rivera in 1560. Olive tree cultivation quickly spread along the valleys of South America's dry Pacific coast where the climate was similar to the Mediterranean. Spanish missionaries established the tree in the 18th century in California. It was first cultivated at Mission San Diego de Alcalá in 1769 or later around 1795. Orchards were started at other missions, but in 1838, an inspection found only two olive orchards in California. Cultivation for oil gradually became a highly successful commercial venture from the 1860s onward. 
In Japan, the first successful planting of olive trees happened in 1908 on Shodo Island, which became the cradle of olive cultivation in Japan. In 2016, olive oil production started in India, with olive saplings planted in Rajasthan's Thar Desert. Favoured by climate warming, several small-scale olive production farms have also been established at fairly high latitudes in Europe and North America since the early 21st century. There were an estimated 865 million olive trees in the world as of 2005, and the vast majority of these were found in Mediterranean countries, with traditionally marginal areas accounting for no more than 25% of olive-planted area and 10% of oil production. Symbolic connotations Ancient Greece Olives are thought to have been domesticated in the third millennium BC at the latest, at which point they, along with grain and grapes, became part of Colin Renfrew's Mediterranean triad of staple crops that fueled the emergence of more complex societies. Olives, and especially (perfumed) olive oil, became a major export product during the Minoan and Mycenaean periods. Dutch archaeologist Jorrit Kelder proposed that the Mycenaeans sent shipments of olive oil, probably alongside live olive branches, to the court of the Egyptian pharaoh Akhenaten as a diplomatic gift. In Egypt, these imported olive branches may have acquired ritual meanings, as they are depicted as offerings on the wall of the Aten temple and were used in wreaths for the burial of Tutankhamun. It is likely that, as well as being used for culinary purposes, olive oil was also used to various other ends, including as a perfume. The ancient Greeks smeared olive oil on their bodies and hair as a matter of grooming and good health. Olive oil was used to anoint kings and athletes in ancient Greece. It was burnt in the sacred lamps of temples and was the "eternal flame" of the original Olympic games. Victors in these games were crowned with its leaves. In Homer's Odyssey, Odysseus crawls beneath two shoots of olive that grow from a single stock, and in the Iliad, (XVII.53ff) there is a metaphoric description of a lone olive tree in the mountains, by a spring; the Greeks observed that the olive rarely thrives at a distance from the sea, which in Greece invariably means up mountain slopes. Greek myth attributed to the primordial culture-hero Aristaeus the understanding of olive husbandry, along with cheese-making and bee-keeping. Olive was one of the woods used to fashion the most primitive Greek cult figures, called xoana, referring to their wooden material; they were reverently preserved for centuries. In an archaic Athenian foundation myth, Athena won the patronage of Attica from Poseidon with the gift of the olive. According to the fourth-century BC father of botany, Theophrastus, olive trees ordinarily attained an age around 200 years, he mentions that the very olive tree of Athena still grew on the Acropolis; it was still to be seen there in the second century AD; and when Pausanias was shown it c. 170 AD, he reported "Legend also says that when the Persians fired Athens the olive was burnt down, but on the very day it was burnt it grew again to the height of two cubits." Indeed, olive suckers sprout readily from the stump, and the great age of some existing olive trees shows that it was possible that the olive tree of the Acropolis dated to the Bronze Age. The olive was sacred to Athena and appeared on the Athenian coinage. 
According to another myth, Elaea was an accomplished athlete killed by her fellow athletes who had grown envious of her; but Athena and Gaia turned her into an olive tree as reward. Theophrastus, in On the Causes of Plants, states that the cultivated olive must be vegetatively propagated; indeed, the pits give rise to thorny, wild-type olives, spread far and wide by birds. Theophrastus reports how the bearing olive can be grafted on the wild olive, for which the Greeks had a separate name, kotinos. In his Enquiry into Plants, he states that the olive can be propagated from a piece of the trunk, the root, a twig, or a stake. Ancient Rome According to Pliny the Elder a vine, a fig tree and an olive tree grew in the middle of the Roman Forum; the olive was planted to provide shade. (The garden was recreated in the 20th century). The Roman poet Horace mentions it in reference to his own diet, which he describes as very simple: "As for me, olives, endives, and smooth mallows provide sustenance." Lord Monboddo comments on the olive in 1779 as one of the foods preferred by the ancients and as one of the most perfect foods. Vitruvius describes of the use of charred olive wood in tying together walls and foundations in his De Architectura. Judaism and Israel Olives were one of the main elements in ancient Israelite cuisine. Olive oil was used for not only food and cooking, but also lighting, sacrificial offerings, ointment, and anointment for priestly or royal office. The olive tree is one of the first plants mentioned in the Hebrew Bible, and one of the most significant. An olive branch (or leaf, depending on translation) was brought back to Noah by a dove to demonstrate that the flood was over (Book of Genesis 8:11). The olive is listed in Deuteronomy 8:8 as one of the seven species that are noteworthy products of the Land of Israel. According to the Halakha, the Jewish law mandatory for all Jews, the olive is one of the seven species that require the recitation of me'eyn shalosh after they are consumed. Olive oil is also the most recommended and best possible oil for the lighting of the Shabbat candles. Due to its importance in the Hebrew Bible, the olive has significant national meaning in modern Israeli culture. Two olive branches appear as part of Israel's emblem and the olive tree is Israel's national tree. Christianity Apart from being mentioned in the Hebrew Bible (the Christian Old Testament), olives play an important role in Christianity. The Mount of Olives, east of Jerusalem, is mentioned several times in the New Testament. The Allegory of the Olive Tree in St Paul's Epistle to the Romans refers to the scattering and gathering of Israel. It compares the Israelites to a tame olive tree and the Gentiles to a wild olive branch. The olive tree itself, as well as olive oil and olives, play an important role in the Bible. Islam The olive tree and olive oil are mentioned seven times in the Quran, and the olive is praised as a precious fruit. Olive tree and olive oil health benefits have been propounded in prophetic medicine. Muhammad is reported to have said: "Take oil of olive and massage with it – it is a blessed tree" (Sunan al-Darimi, 69:103). Olives are substitutes for dates (if not available) during Ramadan fasting, and olive tree leaves are used as incense in some Muslim Mediterranean countries. Palestine In Palestine the olive tree and plant carry the symbolic connotations of resilience, health, ancestral ties and community. 
Researchers have found that the olive tree is tied into the Palestinians' Sutra, A’wana and Sumud. The tree is a means of survival and security, represents their bond to their land, community and animals. Olive trees also serve as a symbol of their identities, which include their physical and emotional aspects and their socio-cultural values. Palestinian people view the olive trees as the first witnesses that Palestine is their homeland. The harvest season is referred to as "Palestine's wedding" and is considered a national holiday when schools close for two days so that pupils and teachers can join in the harvest. This holiday allows community and family members to gather and serves as a ritual that encompasses their values surrounding family, labour, community and aid for other members of the community that do not possess land. This is practised through the tradition of leaving fruit on a tree during the harvest so that those who do not have land and are unable to take part in the harvest can still reap the benefits. United States The Great Seal of the United States first used in 1782 depicts an eagle clutching an olive branch in one of its talons, indicating the power of peace. United Nations The Flag of the United Nations adopted in 1946 is a world map with two olive branches. Oldest known trees An olive tree in Mouriscas, Abrantes, Portugal, (Oliveira do Mouchão) is one of the oldest known olive trees still alive to this day, with an estimated age of 3,350 years, planted approximately at the beginning of the Atlantic Bronze Age. An olive tree in the city of Bar in Montenegro has an estimated age of between 2,014 and 2,480 years. An olive tree on the island of Brijuni in Croatia has a radiocarbon dating age of about 1,600 years. It still gives fruit (about per year), which is made into olive oil. An olive tree in west Athens, named Plato's Olive Tree, is thought to be a remnant of the grove where Plato's Academy was situated, making it an estimated 2,400 years old. The tree consisted of a cavernous trunk from which a few branches were still sprouting in 1975 when a traffic accident caused a bus to uproot it. Following that the trunk was preserved and displayed in the nearby Agricultural University of Athens. In 2013 it was reported that the remaining part of the trunk was uprooted and stolen, allegedly to serve as firewood. The age of an olive tree in Crete, the Finix Olive, is claimed to be more than 2,000 years, based on archaeological evidence around the tree. The olive tree of Vouves in Crete has an age estimated at between 2,000 and 4,000 years. An olive tree called Farga d'Arió in Ulldecona, Catalonia, Spain, has been estimated (with laser-perimetry methods) to date back to 314 AD, which would mean that it was planted when Constantine the Great was Roman emperor. Some Italian olive trees are believed to date back to Ancient Rome (8th century BC to 5th century AD), although identifying progenitor trees in ancient sources is difficult. There are other trees about 1,000 years old in the same garden. The 15th-century trees of Olivo della Linza, at Alliste in the Province of Lecce in Apulia on the Italian mainland, were noted by Bishop Ludovico de Pennis during his pastoral visit to the Diocese of Nardò-Gallipoli in 1452. The village of Bcheale, Lebanon, claims to have the oldest olive trees in the world (4000 BC for the oldest), but no scientific study supports these claims. Other trees in the towns of Amioun appear to be at least 1,500 years old. 
Several trees in the Garden of Gethsemane (from the Hebrew words gat shemanim or olive press) in Jerusalem are claimed to date back to the time of Jesus. A study conducted by the National Research Council of Italy in 2012 used carbon dating on older parts of the trunks of three trees from Gethsemane and came up with the dates of 1092, 1166 and 1198 AD, while DNA tests show that the trees were originally planted from the same parent plant. According to molecular analysis, the tested trees showed the same allelic profile at all microsatellite loci analyzed, which furthermore may indicate attempt to keep the lineage of an older species intact. However, Bernabei writes, "All the tree trunks are hollow inside so that the central, older wood is missing... In the end, only three from a total of eight olive trees could be successfully dated. The dated ancient olive trees do not, however, allow any hypothesis to be made with regard to the age of the remaining five giant olive trees." Babcox concludes, "The roots of the eight oldest trees are possibly much older. Visiting guides to the garden often state that they are two thousand years old." The 2,000-year-old Bidni olive trees on Malta, which have been confirmed through carbon dating, have been protected since 1933 and are listed in UNESCO's Database of National Cultural Heritage Laws. In 2011, after recognising their historical and landscape value, and in recognition of the fact that "only 20 trees remain from 40 at the beginning of the 20th century", Maltese authorities declared the ancient Bidni olive grove at Bidnija as a Tree Protected Area. Uses The olive tree, Olea europaea, has been cultivated for olive oil, fine wood, olive leaf, ornamental reasons, and the olive fruit. About 80% of all harvested olives are turned into oil, while about 20% are used as table olives. The olive is one of the "trinity" or "triad" of basic ingredients in Mediterranean cuisine, the other two being wheat for bread, pasta, and couscous; and the grape for wine. Olive oil Olive oil is a liquid fat obtained from olives, produced by pressing whole olives and extracting the oil. It is commonly used in cooking, for frying foods or as a salad dressing. It is also used in cosmetics, pharmaceuticals, and soaps, and as a fuel for traditional oil lamps, and has additional uses in some religions. Spain accounts for almost half of global olive oil production; other major producers are Portugal, Italy, Tunisia, Greece and Turkey. Per capita consumption is highest in Greece, followed by Italy and Spain. The composition of olive oil varies with the cultivar, elevation, time of harvest and extraction process. It consists mainly of oleic acid (up to 83%), with smaller amounts of other fatty acids including linoleic acid (up to 21%) and palmitic acid (up to 20%). Extra virgin olive oil is required to have no more than 0.8% free acidity and fruity flavor characteristics. Table olives Table olives are classified by the International Olive Council (IOC) into three groups according to the degree of ripeness achieved before harvesting: Green olives are picked when they have obtained full size, while unripe; they are usually shades of green to yellow and contain the bitter phytochemical oleuropein. Semi-ripe or turning-colour olives are picked at the beginning of the ripening cycle, when the colour has begun to change from green to multicolour shades of red to brown. Only the skin is coloured, as the flesh of the fruit lacks pigmentation at this stage, unlike that of ripe olives. 
Black olives or ripe olives are picked at full maturity when fully ripe, displaying colours of purple, brown or black. To leach the oleuropein from olives, commercial producers use lye, which neutralizes the bitterness of oleuropein, producing a mild flavour and soft texture characteristic of California black olives sold in cans. Such olives are typically preserved in brine and sterilized under high heat during the canning process. Fermentation and curing Raw or fresh olives are naturally very bitter; to make them palatable, olives must be cured and fermented, thereby removing oleuropein, a bitter phenolic compound that can reach levels of 14% of dry matter in young olives. In addition to oleuropein, other phenolic compounds render freshly picked olives unpalatable and must also be removed or lowered in quantity through curing and fermentation. Generally speaking, phenolics reach their peak in young fruit and are converted as the fruit matures. Once ripening occurs, the levels of phenolics sharply decline through their conversion to other organic products, which render some cultivars edible immediately. One example of an edible olive native to the island of Thasos is the throubes black olive, which becomes edible when allowed to ripen in the sun, shrivel, and fall from the tree. The curing process may take from a few days with lye, to a few months with brine or salt packing. With the exception of California style and salt-cured olives, all methods of curing involve a major fermentation involving bacteria and yeast that is of equal importance to the final table olive product. Traditional cures, using the natural microflora on the fruit to induce fermentation, lead to two important outcomes: the leaching out and breakdown of oleuropein and other unpalatable phenolic compounds, and the generation of favourable metabolites from bacteria and yeast, such as organic acids, probiotics, glycerol, and esters, which affect the sensory properties of the final table olives. Mixed bacterial/yeast olive fermentations may have probiotic qualities. Lactic acid is the most important metabolite, as it lowers the pH, acting as a natural preservative against the growth of unwanted pathogenic species. The result is table olives which can be stored without refrigeration. Fermentations dominated by lactic acid bacteria are, therefore, the most suitable method of curing olives. Yeast-dominated fermentations produce a different suite of metabolites which provide poorer preservation, so they are corrected with an acid such as citric acid in the final processing stage to provide microbial stability. The many types of preparations for table olives depend on local tastes and traditions. The most important commercial examples are listed below. Lebanese or Phoenician fermentation Applied to green, semiripe, or ripe olives. Olives are soaked in salt water for 24–48 hours. Then they are slightly crushed with a rock to hasten the fermentation process. The olives are stored for a period of up to a year in a container with salt water, lemon juice, lemon peels, laurel and olive leaves, and rosemary. Some recipes may contain white vinegar or olive oil. Spanish or Sevillian fermentation Most commonly applied to green olive preparation, around 60% of all the world's table olives are produced with this method. Olives are soaked in lye (dilute NaOH, 2–4%) for 8–10 hours to hydrolyse the oleuropein. They are usually considered "treated" when the lye has penetrated two-thirds of the way into the fruit. 
They are then washed once or several times in water to remove the caustic solution and transferred to fermenting vessels full of brine at typical concentrations of 8–12% NaCl. The brine is changed on a regular basis to help remove the phenolic compounds. Fermentation is carried out by the natural microbiota present on the olives that survive the lye treatment process. Many organisms are involved, usually reflecting the local conditions or terroir of the olives. During a typical fermentation gram-negative enterobacteria flourish in small numbers at first but are rapidly outgrown by lactic acid bacteria species such as Leuconostoc mesenteroides, Lactobacillus plantarum, Lactobacillus brevis and Pediococcus damnosus. These bacteria produce lactic acid to help lower the pH of the brine and therefore stabilize the product against unwanted pathogenic species. A diversity of yeasts then accumulate in sufficient numbers to help complete the fermentation alongside the lactic acid bacteria. Yeasts commonly mentioned include the teleomorphs Pichia anomala, Pichia membranifaciens, Debaryomyces hansenii and Kluyveromyces marxianus. Once fermented, the olives are placed in fresh brine and acid corrected, to be ready for market. Sicilian or Greek fermentation Applied to green, semiripe and ripe olives, they are almost identical to the Spanish type fermentation process, but the lye treatment process is skipped and the olives are placed directly in fermentation vessels full of brine (8–12% NaCl). The brine is changed on a regular basis to help remove the phenolic compounds. As the caustic treatment is avoided, lactic acid bacteria are only present in similar numbers to yeast and appear to be outdone by the abundant yeasts found on untreated olives. As very little acid is produced by the yeast fermentation, lactic, acetic, or citric acid is often added to the fermentation stage to stabilize the process. Picholine or directly brined fermentation Applied to green, semi-ripe, or ripe olives, they are soaked in lye typically for longer periods than Spanish style (e.g. 10–72 hours) until the solution has penetrated three-quarters of the way into the fruit. They are then washed and immediately brined and acid corrected with citric acid to achieve microbial stability. Fermentation still occurs carried out by acidogenic yeast and bacteria but is more subdued than other methods. The brine is changed on a regular basis to help remove the phenolic compounds, and a series of progressively stronger concentrations of salt are added until the product is fully stabilized and ready to be eaten. Water-cured fermentation Applied to green, semi-ripe, or ripe olives, these are soaked in water or weak brine and this solution is changed on a daily basis for 10–14 days. The oleuropein is naturally dissolved and leached into the water and removed during a continual soak-wash cycle. Fermentation takes place during the water treatment stage and involves a mixed yeast/bacteria ecosystem. Sometimes, the olives are lightly cracked with a blunt instrument to trigger fermentation and speed up the fermentation process. Once debittered, the olives are brined to concentrations of 8–12% NaCl and acid corrected and are then ready to eat. Salt-cured fermentation Applied only to ripe olives, since it is only a light fermentation. They are usually produced in Morocco, Turkey, and other eastern Mediterranean countries. Once picked, the olives are vigorously washed and packed in alternating layers with salt. 
The high concentration of salt draws the moisture out of olives, dehydrating and shriveling them until they look somewhat analogous to a raisin. Once packed in salt, fermentation is minimal and only initiated by the most halophilic yeast species such as Debaryomyces hansenii. Once cured, they are sold in their natural state without any additives. So-called oil-cured olives are cured in salt, and then soaked in oil. California or artificial ripening Applied to green and semi-ripe olives, they are placed in lye and soaked. Upon their removal, they are washed in water injected with compressed air, without fermentation. This process is repeated several times until both oxygen and lye have soaked through to the pit. The repeated, saturated exposure to air oxidises the skin and flesh of the fruit, turning it black in an artificial process that mimics natural ripening. Once fully oxidised or "blackened", they are brined and acid corrected and are then ready for eating. Olive wood Olive wood is very hard and tough and is prized for its durability, colour, high combustion temperature, and interesting grain patterns. Because of the commercial importance of the fruit, slow growth, and relatively small size of the tree, olive wood and its products are relatively expensive. Common uses of olive wood include: kitchen utensils, carved wooden bowls, cutting boards, fine furniture, and decorative items. The yellow or light greenish-brown wood is often finely veined with a darker tint; being very hard and close-grained, it is valued by woodworkers. Ornamental uses In modern landscape design olive trees are frequently used as ornamental features for their distinctively gnarled trunks and evergreen silvery-gray foliage. Cultivation The earliest evidence for the domestication of olives comes from the Chalcolithic period archaeological site of Teleilat el Ghassul in modern Jordan. Farmers in ancient times believed that olive trees would not grow well if planted more than a certain distance from the sea; Theophrastus gives 300 stadia () as the limit. Modern experience does not always confirm this, and, though showing a preference for the coast, they have long been grown further inland in some areas with suitable climates, particularly in the southwestern Mediterranean (Iberia and northwest Africa) where winters are mild. An article on olive tree cultivation in Spain is brought down in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture. Olives are cultivated in many regions of the world with Mediterranean climates, such as South Africa, Chile, Peru, Pakistan, Australia, Oregon, and California, and in areas with temperate climates such as New Zealand. They are also grown in the Córdoba Province, Argentina, which has a temperate climate. Growth and propagation Olive trees show a marked preference for calcareous soils, flourishing best on limestone slopes and crags, and coastal climate conditions. They grow in any light soil, even on clay if well drained, but in rich soils, they are predisposed to disease and produce poor quality oil. (This was noted by Pliny the Elder.) Olives like hot weather and sunny positions without any shade, while temperatures below may injure even a mature tree. They tolerate drought well because of their sturdy and extensive root systems. Olive trees can remain productive for centuries as long as they are pruned correctly and regularly. Only a handful of olive varieties can be used to cross-pollinate. 
'Pendolino' olive trees are partially self-fertile, but pollinizers are needed for a large fruit crop. Other compatible olive tree pollinators include 'Leccino' and 'Maurino'. 'Pendolino' olive trees are used extensively as pollinizers in large olive tree groves. Olives are propagated by various methods. The preferred ways are cuttings and layers; the tree roots easily in favourable soil and throws up suckers from the stump when cut down. However, yields from trees grown from suckers or seeds are poor; they must be budded or grafted onto other specimens to do well. Branches of various thicknesses, cut into lengths and planted deeply in manured ground, soon vegetate. Shorter pieces are sometimes laid horizontally in shallow trenches and, when covered with a few centimetres of soil, rapidly throw up sucker-like shoots. In Greece, grafting the cultivated tree on the wild tree is a common practice. In Italy, embryonic buds, which form small swellings on the stems, are carefully excised and planted under the soil surface, where they soon form a vigorous shoot. The olive is also sometimes grown from seed. To facilitate germination, the oily pericarp is first softened by slight rotting, or soaked in hot water or in an alkaline solution. In situations where extreme cold has damaged or killed the olive tree, the rootstock can survive and produce new shoots which in turn become new trees. In this way, olive trees can regenerate themselves. In Tuscany in 1985, a very severe frost destroyed many productive and aged olive trees and ruined many farmers' livelihoods. However, new shoots appeared in the spring and, once the dead wood was removed, became the basis for new fruit-producing trees. Olives grow very slowly, and over many years, the trunk can attain a considerable diameter. A. P. de Candolle recorded one specimen of exceptional girth. The trees rarely grow very tall and are generally confined to much more limited dimensions by frequent pruning. Olives are very hardy and are resistant to disease and fire. The root system is robust and capable of regenerating the tree even if the above-ground structure is destroyed. The crop from old trees is sometimes enormous, but they seldom bear well two years in succession, and in many cases, a large harvest occurs every sixth or seventh season. Where the olive is carefully cultivated, as in Liguria, Languedoc, and Provence, the trees are regularly pruned. The pruning preserves the flower-bearing shoots of the preceding year, while keeping the tree low enough to allow the easy gathering of the fruit. The spaces between the trees are regularly fertilized. Pests, diseases, and weather Various pathologies can affect olives. The most serious pest is the olive fruit fly (Dacus oleae or Bactrocera oleae), which lays its eggs in the olive most commonly just before it becomes ripe in the autumn. The region surrounding the puncture rots, becomes brown, and takes on a bitter taste, making the olive unfit for eating or for oil. For controlling the pest, the practice has been to spray with insecticides (organophosphates, e.g. dimethoate). Classic organic methods have been applied, such as trapping, applying the bacterium Bacillus thuringiensis, and spraying with kaolin. Such methods are obligatory for organic olives. A fungus, Cycloconium oleaginum, can infect the trees for several successive seasons, causing great damage to plantations. A species of bacterium, Pseudomonas savastanoi pv. oleae, induces tumour growth in the shoots. 
Certain lepidopterous caterpillars feed on the leaves and flowers. Xylella fastidiosa bacteria, which can also infect citrus fruit and vines, has attacked olive trees in Apulia, southern Italy, causing olive quick decline syndrome (OQDS). The main vector is Philaenus spumarius (meadow spittlebug). A pest that spreads through olive trees is the black scale bug, a small black scale insect that resembles a small black spot. They attach themselves firmly to olive trees and reduce the quality of the fruit; their main predators are wasps. The curculio beetle eats the edges of leaves, leaving sawtooth damage. Rabbits eat the bark of olive trees and can do considerable damage, especially to young trees. If the bark is removed around the entire circumference of a tree, it is likely to die. Voles and mice also do damage by eating the roots. At the northern edge of their cultivation zone, for instance in northern Italy, southern France and Switzerland, olive trees suffer occasionally from frost. Gales and long-continued rains during the gathering season also cause damage. In the colder Mediterranean hinterland, olive cultivation is replaced by other fruits, typically the chestnut. As an invasive species Since its first domestication, O. europaea has been spreading back to the wild from planted groves. Its original wild populations in southern Europe have been largely swamped by feral plants. In some other parts of the world where it has been introduced, most notably South Australia, the olive has become a major weed that displaces native vegetation. In South Australia, its seeds are spread by the introduced red fox and by many bird species, including the European starling and the native emu, into woodlands, where they germinate and eventually form a dense canopy that prevents regeneration of native trees. As the climate of South Australia is very dry and bushfire prone, the oil-rich feral olive tree substantially increases the fire hazard of native sclerophyll woodlands. Harvesting Olives are harvested in the autumn and winter. More specifically in the Northern Hemisphere, green olives are picked from the end of September to about the middle of November. In the Southern Hemisphere, green olives are picked from the middle of October to the end of November, and black olives are collected worldwide from the middle of November to the end of January or early February. In southern Europe, harvesting is done for several weeks in winter, but the time varies in each country, and with the season and the cultivar. Most olives today are harvested by shaking the boughs or the whole tree. Using olives found lying on the ground can result in poor quality oil, due to damage. Another method involves standing on a ladder and "milking" the olives into a sack tied around the harvester's waist. This method produces high quality oil. A third method uses a device called an oli-net that wraps around the tree trunk and opens to form an umbrella-like catcher from which workers collect the fruit. Another method uses an electric tool with large tongs that spin around quickly, removing fruit from the tree. Table olive varieties are more difficult to harvest, as workers must take care not to damage the fruit; baskets that hang around the worker's neck are used. In some places in Italy, Croatia, and Greece, olives are harvested by hand because the terrain is too mountainous for machines. As a result, the fruit is not bruised, which leads to a superior finished product. 
The method also involves sawing off branches, which is healthy for future production. The amount of oil contained in the fruit differs greatly by cultivar; the pericarp is usually 60–70% oil. Typical oil yields per tree per year depend on the cultivar. Global production Olives are one of the most extensively cultivated fruit crops in the world. In 2011, the area planted with olive trees was more than twice the amount of land devoted to apples, bananas, or mangoes. Only coconut trees and oil palms command more space. Cultivation area tripled between 1960 and 1998 and reached a peak in 2008. Olive production in the Mediterranean region has declined since 2019 due to climate, economic and political factors. The ten largest producing countries, according to the Food and Agriculture Organization, are all located in the Mediterranean region and produce 95% of the world's olives. In Italy, cultivation of olive trees is widespread in the south, accounting for three quarters of its production. Due to the climate, it is less abundant in the north of Italy, although growth has increased, particularly in the more temperate microclimates of Liguria and the hills around Lake Garda. Approximately 170 million plants are distributed over 1 million farms. Nutrition One hundred grams of cured green olives provide 146 calories, are a rich source of vitamin E (25% of the Daily Value, DV), and contain a large amount of sodium (104% DV); other nutrients are insignificant. Green olives are 75% water, 15% fat, 4% carbohydrates and 1% protein. Phytochemicals The polyphenol composition of olive fruits varies during fruit ripening and during processing by fermentation when olives are immersed whole in brine or crushed to produce oil. In raw fruit, total polyphenol contents, as measured by the Folin method, are 117 mg/100 g in black olives and 161 mg/100 g in green olives, compared to 55 and 21 mg/100 g for extra virgin and virgin olive oil, respectively. Olive fruit contains several types of polyphenols, mainly tyrosols, phenolic acids, flavonols and flavones, and for black olives, anthocyanins. The main bitter flavor of olives before curing results from oleuropein and its aglycone, which total in content, respectively, 72 and 82 mg/100 g in black olives, and 56 and 59 mg/100 g in green olives. During the crushing, kneading and extraction of olive fruit to obtain olive oil, oleuropein, demethyloleuropein and ligstroside are hydrolyzed by endogenous beta-glucosidases to form aldehydes, dialdehydes, and aldehydic aglycones. Polyphenol content also varies with olive cultivar and the manner of presentation, with plain olives having higher contents than those that are pitted or stuffed. Allergenic potential Olive tree pollen is extremely allergenic, with an OPALS allergy scale rating of 10 out of 10. Olea europaea is primarily wind-pollinated and its buoyant pollen is a strong trigger for asthma. One popular variety, "Swan Hill", is widely sold as an "allergy-free" olive tree; however, this variety does bloom and produce allergenic pollen.
Biology and health sciences
Lamiales
null
22483
https://en.wikipedia.org/wiki/Optics
Optics
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Light is a type of electromagnetic radiation, and other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for by using the classical electromagnetic description of light; however, complete electromagnetic descriptions of light are often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on light having both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry, in which it is called physiological optics). Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics. History Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 2000 BC from Crete (Archaeological Museum of Heraclion, Greece). Lenses from Rhodes date around 700 BC, as do Assyrian lenses such as the Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from an ancient Greek word meaning 'appearance' or 'look'. Greek philosophy on optics broke down into two opposing theories on how vision worked, the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many propagators including Democritus, Epicurus, Aristotle and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid (4th–3rd century BC) wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. 
He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Euclid stated the principle of shortest trajectory of light, and considered multiple reflections on flat and spherical mirrors. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarized much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi (–873) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena. In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", correctly describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for lenses and curved mirrors. In the early 11th century, Alhazen (Ibn al-Haytham) wrote the Book of Optics (Kitab al-manazir) in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism. Grosseteste's most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them. The first wearable eyeglasses were invented in Italy around 1286. 
This was the start of the optical industry of grinding and polishing lenses for these "spectacles", first in Venice and Florence in the thirteenth century, and later in the spectacle making centres in both the Netherlands and Germany. Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked). This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands. In the early 17th century, Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras, inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax. He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years. After the invention of the telescope, Kepler set out the theoretical basis on how they worked and described an improved version, known as the Keplerian telescope, using two convex lenses to produce higher magnification. Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes, which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it. This differed substantively from the ancient Greek emission theory. In the late 1660s and early 1670s, Isaac Newton expanded Descartes's ideas into a corpuscle theory of light, famously determining that white light was a mix of colours that can be separated into its component parts with a prism. In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke himself publicly criticised Newton's theories of light and the feud between the two lasted until Hooke's death. In 1704, Newton published Opticks and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light. Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light's wave nature. Young's famous double slit experiment showed that light followed the superposition principle, which is a wave-like property not predicted by Newton's corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics. Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s. The next development in optical theory came in 1899 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta. 
In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. In 1913, Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra. The understanding of the interaction between light and matter that followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole. The ultimate culmination, the theory of quantum electrodynamics, explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons. Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960. Following the work of Paul Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light. Classical optics Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled. Geometrical optics Geometrical optics, or ray optics, describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. These laws were discovered empirically as far back as 984 AD and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows: When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray. The law of reflection says that the reflected ray lies in the plane of incidence, and the angle of reflection equals the angle of incidence. The law of refraction says that the refracted ray lies in the plane of incidence, and the sine of the angle of incidence divided by the sine of the angle of refraction is a constant: sin θ1 / sin θ2 = n, where n is a constant for any two materials and a given colour of light. If the first material is air or vacuum, n is the refractive index of the second material. The laws of reflection and refraction can be derived from Fermat's principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. Approximations Geometric optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. Reflections Reflections can be divided into two types: specular reflection and diffuse reflection. Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. 
This allows for the production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert's cosine law, which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection. In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors produce reflected rays that travel back in the direction from which the incident rays came. This is called retroreflection. Mirrors with curved surfaces can be modelled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with a magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. Refractions Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2. In such situations, Snell's Law describes the resulting deflection of the light ray: n1 sin θ1 = n2 sin θ2, where θ1 and θ2 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. The index of refraction of a medium is related to the speed, v, of light in that medium by n = c/v, where c is the speed of light in vacuum. Snell's Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light, known as dispersion. 
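These refraction relations lend themselves to a short numerical illustration. The following Python sketch is not part of the article; the function names and the air and water index values are illustrative assumptions. It computes the refracted angle from Snell's law and the speed of light in a medium from n = c/v, and reports no angle when Snell's law has no solution.

import math

C_VACUUM = 299_792_458.0  # speed of light in vacuum, in m/s

def refracted_angle(n1, n2, theta1_deg):
    """Refraction angle in degrees from Snell's law, n1*sin(theta1) = n2*sin(theta2).
    Returns None when no refracted ray exists (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # incidence angle beyond the critical angle; all light is reflected
    return math.degrees(math.asin(s))

def speed_in_medium(n):
    """Speed of light in a medium of refractive index n, from n = c/v."""
    return C_VACUUM / n

# Light passing from air (n about 1.00) into water (n about 1.33) at 30 degrees
print(refracted_angle(1.00, 1.33, 30.0))  # about 22 degrees, bent toward the normal
# Light inside water hitting the surface at 60 degrees: no refracted ray exists
print(refracted_angle(1.33, 1.00, 60.0))  # None
# Speed of light in water
print(speed_in_medium(1.33))              # about 2.25e8 m/s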
Taking this into account, Snell's Law can be used to predict how a prism will disperse light into a spectrum. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. Some media have an index of refraction which varies gradually with position and, therefore, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in the index of refraction of air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying indexes of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics. For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell's law predicts that there is no refracted angle θ₂ when the incident angle θ₁ is large. In this case, no transmission occurs; all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection, allowing essentially no light to be lost over the length of the cable. Lenses A device that produces converging or diverging light rays due to refraction is known as a lens. Lenses are characterized by their focal length: a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker's equation. Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation 1/S₁ + 1/S₂ = 1/f, where S₁ is the distance from the object to the lens, S₂ is the distance from the lens to the image, and f is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens. Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at a finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens's front focal point. Rays from an object at a finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real. Lenses suffer from aberrations that distort images. Monochromatic aberrations occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light. Physical optics In physical optics, light is considered to propagate as waves. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics.
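Two quantities mentioned above are not written out: the critical angle beyond which total internal reflection occurs, and the lensmaker's equation that gives the focal length of a thin lens in air. In the usual notation (n₁ > n₂ for the critical angle; R₁ and R₂ are the radii of curvature of the two lens surfaces and n the index of the lens material):

```latex
\sin\theta_c = \frac{n_2}{n_1},
\qquad
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
```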
The speed of light waves in air is approximately 3.0×108 m/s (exactly 299,792,458 m/s in vacuum). The wavelength of visible light waves varies between 400 and 700 nm, but the term "light" is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm). The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is "waving" in what medium. Until the middle of the 19th century, most physicists believed in an "ethereal" medium in which the light disturbance propagated. The existence of electromagnetic waves was predicted in 1865 by Maxwell's equations. These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves. Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered. Modelling and design of optical systems using physical optics Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors. The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens' hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. The Kirchhoff diffraction equation, which is derived using Maxwell's equations, puts the Huygens-Fresnel equation on a firmer physical foundation. Examples of the application of Huygens–Fresnel principle can be found in the articles on diffraction and Fraunhofer diffraction. More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with materials whose electric and magnetic properties affect the interaction of light with the material. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light. Numerical modeling techniques such as the finite element method, the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions. All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing. Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics. Superposition and interference In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances. 
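For the Gaussian-beam model mentioned above, the standard expressions for how the beam radius w grows with distance z from the beam waist w₀ are as follows (λ is the wavelength and z_R the Rayleigh range), which quantify both the divergence of a laser beam and the smallest spot to which it can be focused:

```latex
w(z) = w_0 \sqrt{1 + \left(\frac{z}{z_R}\right)^{2}},
\qquad
z_R = \frac{\pi w_0^{2}}{\lambda}
```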
This interaction of waves to produce a resulting pattern is generally termed "interference" and can result in a variety of outcomes. If two waves of the same wavelength and frequency are in phase, both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location. See below for an illustration of this effect. Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns. Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions. The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light. The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with a thickness of one-fourth the wavelength of incident light. The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength. Constructive interference in thin films can create a strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors, interference filters, heat reflectors, and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks. Diffraction and optical resolution Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi, who also coined the term from the Latin . Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton's rings while James Gregory recorded his observations of diffraction patterns from bird feathers. The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his interference experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles. In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction. 
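The quarter-wave antireflective coating described above can be stated explicitly. For a single film of refractive index n_f on a substrate of index n_s in air, the film thickness d satisfies the quarter-wave condition at the design wavelength λ₀, and the residual reflection is minimised when the film index is close to the geometric mean of the surrounding indices:

```latex
n_f\, d = \frac{\lambda_0}{4},
\qquad
n_f \approx \sqrt{n_{\text{air}}\, n_s}
```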
The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength (). In general, the equation takes the form where is the separation between two wavefront sources (in the case of Young's experiments, it was two slits), is the angular separation between the central fringe and the order fringe, where the central maximum is . This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing. More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction. X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom. To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection, with the associated bright spots occurring in unique patterns and being twice the spacing between atoms. Diffraction effects limit the ability of an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in diffraction-limited optics) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The size of such a disk is given by where is the angular resolution, is the wavelength of the light, and is the diameter of the lens aperture. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution. Interferometry, with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible. For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle. Astronomers refer to this effect as the quality of astronomical seeing. Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit. Dispersion and scattering Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering which occurs when electromagnetic waves are deflected by single particles. 
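In conventional symbols, the fringe equation and the Airy-disk (Rayleigh-criterion) resolution limit discussed above read as follows, with d the separation between the two wavefront sources (slits), m the fringe order, λ the wavelength, and D the aperture diameter:

```latex
d \sin\theta_m = m\lambda,
\qquad
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
```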
In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independent of the frequency, in contrast to Compton scattering which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering while the similar process for scattering by particles that are similar or larger in wavelength is known as Mie scattering with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering, wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material. Dispersion occurs when different frequencies of light have different phase velocities, due either to material properties (material dispersion) or to the geometry of an optical waveguide (waveguide dispersion). The most familiar form of dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called "normal dispersion". It occurs in all dielectric materials, in wavelength ranges where the material does not absorb light. In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called "anomalous dispersion". The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell's law predicts that light incident at an angle to the normal will be refracted at an angle . Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern. Material dispersion is often characterised by the Abbe number, which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant. Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. "Group velocity dispersion" manifests as a spreading-out of the signal "envelope" of the radiation and can be quantified with a group dispersion delay parameter: where is the group velocity. For a uniform medium, the group velocity is where is the index of refraction and is the speed of light in a vacuum. This gives a simpler form for the dispersion delay parameter: If is less than zero, the medium is said to have positive dispersion or normal dispersion. If is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components slow down more than the lower frequency components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high-frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time. 
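Written out, the group dispersion delay parameter and the group velocity referred to above are, for a uniform medium of index n(λ):

```latex
D = \frac{d}{d\lambda}\!\left(\frac{1}{v_g}\right),
\qquad
v_g = \frac{c}{\,n - \lambda \dfrac{dn}{d\lambda}\,},
\qquad\text{so}\qquad
D = -\frac{\lambda}{c}\,\frac{d^{2}n}{d\lambda^{2}}
```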
The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal. Polarisation Polarisation is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave's direction of travel. The oscillations may be oriented in a single direction (linear polarisation), or the oscillation direction may rotate as the wave travels (circular or elliptical polarisation). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality. The typical way to consider polarisation is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled and (with indicating the direction of travel). The shape traced out in the x-y plane by the electric field vector is a Lissajous figure that describes the polarisation state. The following figures show some examples of the evolution of the electric field vector (blue), with time (the vertical axes), at a particular point in space, along with its and components (red/left and green/right), and the path traced by the vector in the plane (purple): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation. In the leftmost figure above, the and components of the light wave are in phase. In this case, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarisation. The direction of this line depends on the relative amplitudes of the two components. In the middle figure, the two orthogonal components have the same amplitudes and are 90° out of phase. In this case, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the component can be 90° ahead of the component or it can be 90° behind the component. In this special case, the electric vector traces out a circle in the plane, so this polarisation is called circular polarisation. The rotation direction in the circle depends on which of the two-phase relationships exists and corresponds to right-hand circular polarisation and left-hand circular polarisation. In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarisation is called elliptical polarisation because the electric vector traces out an ellipse in the plane (the polarisation ellipse). This is shown in the above figure on the right. Detailed mathematics of polarisation is done using Jones calculus and is characterised by the Stokes parameters. 
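In Jones calculus, the polarisation states described above are represented by two-component complex vectors. In one common convention (up to normalisation and an overall phase), horizontal linear, 45° linear, and right-circular polarisation are written:

```latex
\begin{pmatrix}1\\0\end{pmatrix},
\qquad
\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix},
\qquad
\frac{1}{\sqrt{2}}\begin{pmatrix}1\\-i\end{pmatrix}
```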
Changing polarisation Media that have different indexes of refraction for different polarisation modes are called birefringent. Well-known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarisation, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarisation state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colours and rainbow-like effects. In mineralogy, such properties, known as pleochroism, are frequently exploited for the purpose of identifying minerals using polarisation microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress, a phenomenon which is the basis of photoelasticity. Non-birefringent methods to rotate the linear polarisation of light beams include the use of prismatic polarisation rotators, which use total internal reflection in a prism set designed for efficient collinear transmission. Media that reduce the amplitude of certain polarisation modes are called dichroic, with devices that block nearly all of the radiation in one mode known as polarising filters or simply "polarisers". Malus' law, which is named after Étienne-Louis Malus, says that when a perfect polariser is placed in a linearly polarised beam of light, the intensity, I, of the light that passes through is given by I = I₀ cos²θ, where I₀ is the initial intensity and θ is the angle between the light's initial polarisation direction and the axis of the polariser. A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarisations at all possible angles. Since the average value of cos²θ is 1/2, the transmission coefficient becomes I/I₀ = 1/2. In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types. In addition to birefringence and dichroism in extended media, polarisation effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster's angle. When light reflects from a thin film on a surface, interference between the reflections from the film's surfaces can produce polarisation in the reflected and transmitted light. Natural light Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarised. If there is partial correlation between the emitters, the light is partially polarised.
If the polarisation is consistent across the spectrum of the source, partially polarised light can be described as a superposition of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarisation, and the parameters of the polarisation ellipse. Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light. Polarisation occurs when light is scattered in the atmosphere. The scattered light produces the brightness and colour in clear skies. This partial polarisation of scattered light can be taken advantage of using polarising filters to darken the sky in photographs. Optical polarisation is principally of importance in chemistry due to circular dichroism and optical rotation (circular birefringence) exhibited by optically active (chiral) molecules. Modern optics Modern optics encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics, deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons, respond to individual photons. Electronic image sensors, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells, too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics. Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. Other research focuses on the phenomenology of electromagnetic waves as in singular optics, non-imaging optics, non-linear optics, statistical optics, and radiometry. Additionally, computer engineers have taken an interest in integrated optics, machine vision, and photonic computing as possible components of the "next generation" of computers. Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering. Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. Some of these fields overlap, with nebulous boundaries between the subjects' terms that mean slightly different things in different parts of the world and in different areas of industry. A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology. Lasers A laser is a device that emits light, a kind of electromagnetic radiation, through a process called stimulated emission. The term laser is an acronym for . 
Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the maser, was developed first, devices that emit microwave and radio frequencies are usually called masers. The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories. When first invented, they were called "a solution looking for a problem". Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982. These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers. Lasers are used in medicine in areas such as bloodless surgery, laser eye surgery, and laser capture microdissection and in military applications such as missile defence systems, electro-optical countermeasures (EOCM), and lidar. Lasers are also used in holograms, bubblegrams, laser light shows, and laser hair removal. Kapitsa–Dirac effect The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers). Applications Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony. Human eye The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye's optical power. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. The light then passes through the lens, which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour, and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot. There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light. Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision. Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. 
There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision. In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision. Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells. Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells. Ciliary muscles around the lens allow the eye's focus to be adjusted. This process is known as accommodation. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point's location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists, ophthalmologists, and opticians usually consider an appropriate near point to be closer than normal reading distance—approximately 25 cm. Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia. Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images. All of these conditions can be corrected using corrective lenses. For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea. The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism. Visual effects Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. 
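As a worked example of the diopter convention described above: a myopic eye whose far point is 0.5 m away needs a corrective lens that images distant objects at that far point, i.e. a diverging lens of focal length −0.5 m (the small distance between lens and eye is neglected here):

```latex
P = \frac{1}{f} = \frac{1}{-0.5\ \text{m}} = -2\ \text{diopters}
```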
Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences. Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room, Hering, Müller-Lyer, Orbison, Ponzo, Sander, and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective. This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith. This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics. Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall, Ehrenstein, Fraser spiral, Poggendorff, and Zöllner illusions. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. For example, transparent tissues with a grid structure produce shapes known as moiré patterns, while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns. Optical instruments Single lenses have a variety of applications including photographic lenses, corrective lenses, and magnifying glasses while single mirrors are used in parabolic reflectors and rear-view mirrors. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope which were both invented by the Dutch in the late 16th century. Microscopes were first developed with just two lenses: an objective lens and an eyepiece. The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as compound microscopes have many lenses in them (typically four) to optimize the functionality and enhance image stability. A slightly different variety of microscope, the comparison microscope, looks at side-by-side images to produce a stereoscopic binocular view that appears three dimensional when used by humans. The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. 
The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The main goal of a telescope is not necessarily magnification, but rather the collection of light which is determined by the physical size of the objective lens. Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification. Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are reflecting telescopes, that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point instead. Photography The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot which is summarized by the relation Exposure ∝ ApertureArea × ExposureTime × SceneLuminance In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule which gives a rough estimate for the settings needed to estimate the proper exposure in daylight. A camera's aperture is measured by a unitless number called the f-number or f-stop, #, often notated as , and given by where is the focal length, and is the diameter of the entrance pupil. By convention, "#" is treated as a single symbol, and specific values of # are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens, this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera which is able to focus all images perfectly, regardless of distance, but requires very long exposure times. The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship to the diagonal size of the film or sensor size of the camera to the focal length of the lens: Normal lens: angle of view of about 50° (called normal because this angle considered roughly equivalent to human vision) and a focal length approximately equal to the diagonal of the film or sensor. Wide-angle lens: angle of view wider than 60° and focal length shorter than a normal lens. Long focus lens: angle of view narrower than a normal lens. This is any lens with a focal length longer than the diagonal measure of the film or sensor. 
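Explicitly, the f-number (written f/#) defined above is the ratio of the focal length to the entrance-pupil diameter; for example, a 50 mm lens with a 25 mm entrance pupil is working at f/2:

```latex
N = \frac{f}{D},
\qquad
N = \frac{50\ \text{mm}}{25\ \text{mm}} = 2
```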
The most common type of long focus lens is the telephoto lens, a design that uses a special telephoto group to be physically shorter than its focal length. Modern zoom lenses may have some or all of these attributes. The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed, or, for digital media, by the quantum efficiency). Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the sensitivity through film cameras and digital cameras. Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion. Atmospheric optics The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena. The blue colour of the sky is a direct result of Rayleigh scattering which redirects higher frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset. Additional particulate matter in the sky can scatter different colours at different angles creating colourful glowing skies at dusk and dawn. Scattering off of ice crystals and other particles in the atmosphere are responsible for halos, afterglows, coronas, rays of sunlight, and sun dogs. The variation in these kinds of phenomena is due to different particle sizes and geometries. Mirages are optical phenomena in which light rays are bent due to thermal variations in the refraction index of air, producing displaced or heavily distorted images of distant objects. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like "fairy tale castles". Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42° with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54° with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon.
Physical sciences
Physics
null
22497
https://en.wikipedia.org/wiki/OpenGL
OpenGL
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering. Silicon Graphics, Inc. (SGI) began developing OpenGL in 1991 and released it on June 30, 1992. It is used for a variety of applications, including computer-aided design (CAD), video games, scientific visualization, virtual reality, and flight simulation. Since 2006, OpenGL has been managed by the non-profit technology consortium Khronos Group. Design The OpenGL specification describes an abstract application programming interface (API) for drawing 2D and 3D graphics. It is designed to be implemented mostly or entirely using hardware acceleration such as a GPU, although it is possible for the API to be implemented entirely in software running on a CPU. The API is defined as a set of functions which may be called by the client program, alongside a set of named integer constants (for example, the constant GL_TEXTURE_2D, which corresponds to the decimal number 3553). Although the function definitions are superficially similar to those of the programming language C, they are language-independent. As such, OpenGL has many language bindings, some of the most noteworthy being the JavaScript binding WebGL (API, based on OpenGL ES 2.0, for 3D rendering from within a web browser); the C bindings WGL, GLX and CGL; the C binding provided by iOS; and the Java and C bindings provided by Android. In addition to being language-independent, OpenGL is also cross-platform. The specification says nothing on the subject of obtaining and managing an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason, OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing. Development OpenGL is no longer in active development, whereas between 2001 and 2014, OpenGL specification was updated mostly on a yearly basis, with two releases (3.1 and 3.2) taking place in 2009 and three (3.3, 4.0 and 4.1) in 2010. The latest OpenGL specification 4.6 was released in 2017 after a three-year break, and was limited to inclusion of eleven existing ARB and EXT extensions into the core profile. Active development of OpenGL was dropped in favor of the Vulkan API, released in 2016, and codenamed glNext during initial development. In 2017, Khronos Group announced that OpenGL ES would not have new versions and has since concentrated on development of Vulkan and other technologies. As a result, certain capabilities offered by modern GPUs, e.g. ray tracing, are not supported by the OpenGL standard. However, support for newer features might be provided through the vendor-specific OpenGL extensions. New versions of the OpenGL specifications are released by the Khronos Group, each of which extends the API to support various new features. The details of each version are decided by consensus between the Group's members, including graphics card manufacturers, operating system designers, and general technology companies such as Mozilla and Google. In addition to the features required by the core API, graphics processing unit (GPU) vendors may provide additional functionality in the form of extensions. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. 
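As a minimal illustration of the function-plus-named-constant style described under Design, the following C sketch creates and configures a small texture. The header name and the surrounding context/window creation vary by platform and toolkit and are assumed here; the pixel data and the function name upload_checker_texture are purely illustrative.

```c
#include <GL/gl.h>

/* Sketch: upload a 2x2 RGBA checker texture using OpenGL 1.x-era calls.
   Assumes a current OpenGL context has already been created by a windowing toolkit. */
void upload_checker_texture(void)
{
    static const unsigned char pixels[2 * 2 * 4] = {
        255, 255, 255, 255,    0,   0,   0, 255,
          0,   0,   0, 255,  255, 255, 255, 255
    };

    GLuint tex = 0;
    glGenTextures(1, &tex);                 /* reserve an object name */
    glBindTexture(GL_TEXTURE_2D, tex);      /* GL_TEXTURE_2D is the named constant 3553 */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```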
Vendors can use extensions to expose custom APIs without needing support from other vendors or the Khronos Group as a whole, which greatly increases the flexibility of OpenGL. All extensions are collected in, and defined by, the OpenGL Registry. Each extension is associated with a short identifier, based on the name of the company which developed it. For example, Nvidia's identifier is NV, which is part of the extension name GL_NV_half_float, the constant GL_HALF_FLOAT_NV, and the function glVertex2hNV(). If multiple vendors agree to implement the same functionality using the same API, a shared extension may be released, using the identifier EXT. In such cases, it could also happen that the Khronos Group's Architecture Review Board gives the extension their explicit approval, in which case the identifier ARB is used. The features introduced by each new version of OpenGL are typically formed from the combined features of several widely implemented extensions, especially extensions of type ARB or EXT. Documentation The OpenGL Architecture Review Board released a series of manuals along with the specification which have been updated to track changes in the API. These are commonly referred to by the colors of their covers: The Red Book OpenGL Programming Guide, 9th Edition. The Official Guide to Learning OpenGL, Version 4.5 with SPIR-V The Orange Book OpenGL Shading Language, 3rd edition. A tutorial and reference book for GLSL. Historic books (pre-OpenGL 2.0): The Green Book OpenGL Programming for the X Window System. A book about X11 interfacing and OpenGL Utility Toolkit (GLUT). The Blue Book OpenGL Reference manual, 4th edition. Essentially a hard-copy printout of the Unix manual (man) pages for OpenGL. Includes a poster-sized fold-out diagram showing the structure of an idealised OpenGL implementation. The Alpha Book (white cover) OpenGL Programming for Windows 95 and Windows NT. A book about interfacing OpenGL with Microsoft Windows. OpenGL's documentation is also accessible via its official webpage. Associated libraries The earliest versions of OpenGL were released with a companion library called the OpenGL Utility Library (GLU). It provided simple, useful features which were unlikely to be supported in contemporary hardware, such as tessellating, and generating mipmaps and primitive shapes. The GLU specification was last updated in 1998 and depends on OpenGL features which are now deprecated. Context and window toolkits Given that creating an OpenGL context is quite a complex process, and given that it varies between operating systems, automatic OpenGL context creation has become a common feature of several game-development and user-interface libraries, including SDL, Allegro, SFML, FLTK, and Qt. A few libraries have been designed solely to produce an OpenGL-capable window. The first such library was OpenGL Utility Toolkit (GLUT), later superseded by freeglut. GLFW is a newer alternative. These toolkits are designed to create and manage OpenGL windows, and manage input, but little beyond that. GLFW – A cross-platform windowing and keyboard-mouse-joystick handler; is more game-oriented freeglut – A cross-platform windowing and keyboard-mouse handler; its API is a superset of the GLUT API, and it is more stable and up to date than GLUT OpenGL Utility Toolkit (GLUT) – An old windowing handler, no longer maintained. 
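Returning to the extension mechanism described above, an application normally checks at run time that an extension is advertised before using it. The following is a sketch using the indexed query introduced with OpenGL 3.0; it assumes an extension-loading library such as GLEW (one of the libraries listed below) has been initialised so the entry points are declared and available.

```c
#include <string.h>
#include <GL/glew.h>   /* provides declarations for glGetStringi and GL_NUM_EXTENSIONS */

/* Sketch: return non-zero if the current context advertises the named extension,
   e.g. has_extension("GL_NV_half_float"). Requires an OpenGL 3.0+ context and
   a prior successful call to glewInit(). */
int has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}
```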
Several "multimedia libraries" can create OpenGL windows, in addition to input, sound and other tasks useful for game-like applications Allegro 5 – A cross-platform multimedia library with a C API focused on game development Simple DirectMedia Layer (SDL) – A cross-platform multimedia library with a C API SFML – A cross-platform multimedia library with a C++ API and multiple other bindings to languages such as C#, Java, Haskell, and Go Widget toolkits FLTK – A small cross-platform C++ widget library Qt – A cross-platform C++ widget toolkit. It provides many OpenGL helper objects, which even abstract away the difference between desktop GL and OpenGL ES wxWidgets – A cross-platform C++ widget toolkit Extension loading libraries Given the high workload involved in identifying and loading OpenGL extensions, a few libraries have been designed which load all available extensions and functions automatically. Examples include OpenGL Easy Extension library (GLEE), OpenGL Extension Wrangler Library (GLEW) and glbinding. Extensions are also loaded automatically by most language bindings, such as Java OpenGL, PyOpenGL and WebGL. Implementations Mesa 3D is an open-source implementation of OpenGL. It can do pure software rendering, and it may also use hardware acceleration on BSD, Linux, and other platforms by taking advantage of the Direct Rendering Infrastructure. As of version 20.0, it implements version 4.6 of the OpenGL standard. History In the 1980s, developing software that could function with a wide range of graphics hardware was a challenge without a cross-platform library. Software developers wrote custom interfaces and drivers for each piece of hardware. This was expensive and resulted in multiplication of effort. By the early 1990s, Silicon Graphics (SGI) was a leader in 3D graphics for workstations. Their IRIS GL API became the industry standard, as IRIS GL was considered easier to use, and it supported immediate mode rendering, therefore being faster than competitors like PHIGS. SGI's competitors (including Sun Microsystems, Hewlett-Packard and IBM) were also able to bring to market 3D hardware supported by extensions made to the PHIGS standard, which pressured SGI to open source a version of IRIS GL as a public standard called OpenGL. However, SGI had many customers for whom the change from IRIS GL to OpenGL would demand significant investment. Moreover, IRIS GL had API functions that were irrelevant to 3D graphics. For example, it included a windowing, keyboard and mouse API, in part because it was developed before the X Window System and Sun's NeWS. IRIS GL libraries also were unsuitable for opening due to licensing and patent issues. These factors required SGI to continue to support the advanced and proprietary Iris Inventor and Iris Performer programming APIs while market support for OpenGL matured. One of the restrictions of IRIS GL was that it only provided access to features supported by the underlying hardware. If the graphics hardware did not support a feature natively, then the application could not use it. OpenGL overcame this problem by providing software implementations of features unsupported by hardware, allowing applications to use advanced graphics on relatively low-powered systems. OpenGL standardized access to hardware, pushed the development responsibility of hardware interface programs (device drivers) to hardware manufacturers, and delegated windowing functions to the underlying operating system. 
With so many different kinds of graphics hardware, getting them all to speak the same language in this way had a remarkable impact by giving software developers a higher-level platform for 3D-software development. In 1992, SGI led the creation of the OpenGL Architecture Review Board (OpenGL ARB), the group of companies that would maintain and expand the OpenGL specification in the future. Two years later, they also played with the idea of releasing something called "OpenGL++" which included elements such as a scene-graph API (presumably based on their Performer technology). The specification was circulated among a few interested parties – but never turned into a product. Released in 1996, Microsoft's Direct3D eventually became the main competitor of OpenGL. Over 50 game developers signed an open letter to Microsoft, released on June 12, 1997, calling on the company to actively support OpenGL. On December 17, 1997, Microsoft and SGI initiated the Fahrenheit project, which was a joint effort with the goal of unifying the OpenGL and Direct3D interfaces (and adding a scene-graph API too). In 1998, Hewlett-Packard joined the project. It initially showed some promise of bringing order to the world of interactive 3D computer graphics APIs, but on account of financial constraints at SGI, strategic reasons at Microsoft, and a general lack of industry support, it was abandoned in 1999. In July 2006, the OpenGL Architecture Review Board voted to transfer control of the OpenGL API standard to the Khronos Group. Industry support Despite the emergence of newer graphics APIs such as its successor Vulkan and Apple's Metal, OpenGL continues to be a widely used standard. This continued relevance is supported by several factors: ongoing development with new extensions and driver optimizations, its cross-platform compatibility, and the availability of compatibility layers like ANGLE and Zink. These layers allow OpenGL to run efficiently on top of Vulkan and Metal, offering a pathway for continued use or gradual transitions for developers. However, the graphics API landscape has been shifting, and some companies are moving away from OpenGL. In June 2018, Apple deprecated the OpenGL APIs on all of its platforms (iOS, macOS and tvOS), strongly encouraging developers to use its proprietary Metal API, which was introduced in 2014. Game developers have also begun to adopt newer APIs. id Software, which had used OpenGL since the late 1990s in games such as GLQuake and several entries in the Doom franchise, transitioned to its successor Vulkan in its id Tech 7 engine, having first supported Vulkan in a 2016 update to its id Tech 6 engine. The company's first licensed use of OpenGL was in its Quake II engine, also known as id Tech 2. In March 2023, Valve removed OpenGL support from Dota 2 in favor of Vulkan. Atypical Games, with support from Samsung, updated their game engine to use Vulkan, rather than OpenGL, across all non-Apple platforms. The Khronos Group, the consortium responsible for OpenGL's development, has stopped active development of OpenGL, and the API has not gained support for a number of modern graphics technologies, such as ray tracing, on-GPU video decoding, and deep-learning-based anti-aliasing such as Nvidia DLSS and AMD FSR. Google's Fuchsia OS, while using Vulkan natively and requiring a Vulkan-conformant GPU, still intends to support OpenGL on top of Vulkan via the ANGLE translation layer.
Version history The first version of OpenGL, version 1.0, was released on June 30, 1992, by Mark Segal and Kurt Akeley. Since then, OpenGL has occasionally been extended by releasing a new version of the specification. Such releases define a baseline set of features which all conforming graphics cards must support, and against which new extensions can more easily be written. Each new version of OpenGL tends to incorporate several extensions which have widespread support among graphics-card vendors, although the details of those extensions may be changed. OpenGL 2.0 Release date: September 7, 2004 OpenGL 2.0 was originally conceived by 3Dlabs to address concerns that OpenGL was stagnating and lacked a strong direction. 3Dlabs proposed a number of major additions to the standard. Most of these were, at the time, rejected by the ARB or otherwise never came to fruition in the form that 3Dlabs proposed. However, their proposal for a C-style shading language was eventually completed, resulting in the current formulation of the OpenGL Shading Language (GLSL or GLslang). Like the assembly-like shading languages it was replacing, it allowed replacing the fixed-function vertex and fragment pipe with shaders, though this time written in a C-like high-level language. The design of GLSL was notable for making relatively few concessions to the limits of the hardware then available. This harked back to the earlier tradition of OpenGL setting an ambitious, forward-looking target for 3D accelerators rather than merely tracking the state of currently available hardware. The final OpenGL 2.0 specification includes support for GLSL. Longs Peak and OpenGL 3.0 Before the release of OpenGL 3.0, the new revision had the codename Longs Peak. At the time of its original announcement, Longs Peak was presented as the first major API revision in OpenGL's lifetime. It consisted of an overhaul to the way that OpenGL works, calling for fundamental changes to the API. The draft introduced a change to object management. The GL 2.1 object model was built upon the state-based design of OpenGL. That is, to modify an object or to use it, one needs to bind the object to the state system, then make modifications to the state or perform function calls that use the bound object. Because of OpenGL's use of a state system, objects must be mutable. That is, the basic structure of an object can change at any time, even if the rendering pipeline is asynchronously using that object. A texture object can be redefined from 2D to 3D. This requires any OpenGL implementations to add a degree of complexity to internal object management. Under the Longs Peak API, object creation would become atomic, using templates to define the properties of an object which would be created with one function call. The object could then be used immediately across multiple threads. Objects would also be immutable; however, they could have their contents changed and updated. For example, a texture could change its image, but its size and format could not be changed. To support backwards compatibility, the old state based API would still be available, but no new functionality would be exposed via the old API in later versions of OpenGL. This would have allowed legacy code bases, such as the majority of CAD products, to continue to run while other software could be written against or ported to the new API. 
Longs Peak was initially due to be finalized in September 2007 under the name OpenGL 3.0, but the Khronos Group announced on October 30 that it had run into several issues that it wished to address before releasing the specification. As a result, the spec was delayed, and the Khronos Group went into a media blackout until the release of the final OpenGL 3.0 spec. The final specification proved far less revolutionary than the Longs Peak proposal. Instead of removing all immediate mode and fixed functionality (non-shader mode), the spec included them as deprecated features. The proposed object model was not included, and no plans have been announced to include it in any future revisions. As a result, the API remained largely the same with a few existing extensions being promoted to core functionality. Among some developer groups this decision caused something of an uproar, with many developers professing that they would switch to DirectX in protest. Most complaints revolved around the lack of communication by Khronos to the development community and multiple features being discarded that were viewed favorably by many. Other frustrations included the requirement of DirectX 10 level hardware to use OpenGL 3.0 and the absence of geometry shaders and instanced rendering as core features. Other sources reported that the community reaction was not quite as severe as originally presented, with many vendors showing support for the update. OpenGL 3.0 Release date: August 11, 2008 OpenGL 3.0 introduced a deprecation mechanism to simplify future revisions of the API. Certain features, marked as deprecated, could be completely disabled by requesting a forward-compatible context from the windowing system. OpenGL 3.0 features could still be accessed alongside these deprecated features, however, by requesting a full context. Deprecated features include: All fixed-function vertex and fragment processing Direct-mode rendering, using glBegin and glEnd Display lists Indexed-color rendering targets OpenGL Shading Language versions 1.10 and 1.20 OpenGL 3.1 Release date: March 24, 2009 OpenGL 3.1 fully removed all of the features which were deprecated in version 3.0, with the exception of wide lines. From this version onwards, it's not possible to access new features using a full context, or to access deprecated features using a forward-compatible context. An exception to the former rule is made if the implementation supports the ARB_compatibility extension, but this is not guaranteed. Hardware support: Mesa supports ARM Panfrost with Version 21.0. OpenGL 3.2 Release date: August 3, 2009 OpenGL 3.2 further built on the deprecation mechanisms introduced by OpenGL 3.0, by dividing the specification into a core profile and compatibility profile. Compatibility contexts include the previously removed fixed-function APIs, equivalent to the ARB_compatibility extension released alongside OpenGL 3.1, while core contexts do not. OpenGL 3.2 also included an upgrade to GLSL version 1.50. OpenGL 3.3 Release date: March 11, 2010 Mesa supports software Driver SWR, softpipe and for older Nvidia cards with NV50. OpenGL 4.0 Release date: March 11, 2010 OpenGL 4.0 was released alongside version 3.3. It was designed for hardware able to support Direct3D 11. As in OpenGL 3.0, this version of OpenGL contains a high number of fairly inconsequential extensions, designed to thoroughly expose the abilities of Direct3D 11-class hardware. Only the most influential extensions are listed below. 
Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer. OpenGL 4.1 Release date: July 26, 2010 Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Ivy Bridge processors and newer. Minimum "maximum texture size" is 16,384 × 16,384 for GPUs implementing this specification. OpenGL 4.2 Release date: August 8, 2011 Support for shaders with atomic counters and load-store-atomic read-modify-write operations to one level of a texture Drawing multiple instances of data captured from GPU vertex processing (including tessellation), to enable complex objects to be efficiently repositioned and replicated Support for modifying an arbitrary subset of a compressed texture, without having to re-download the whole texture to the GPU for significant performance improvements Hardware support: Nvidia GeForce 400 series and newer, AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), and Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge and newer) OpenGL 4.3 Release date: August 6, 2012 Compute shaders leveraging GPU parallelism within the context of the graphics pipeline Shader storage buffer objects, allowing shaders to read and write buffer objects like image load/store from 4.2, but through the language rather than function calls. Image format parameter queries ETC2/EAC texture compression as a standard feature Full compatibility with OpenGL ES 3.0 APIs Debug abilities to receive debugging messages during application development Texture views to interpret textures in different ways without data replication Increased memory security and multi-application robustness Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Haswell processors and newer. (Linux Mesa: Ivy Bridge without stencil texturing, Haswell and newer), Nvidia GeForce 400 series and newer. VIRGL Emulation for virtual machines supports 4.3+ with Mesa 20. OpenGL 4.4 Release date: July 22, 2013 Enforced buffer object usage controls Asynchronous queries into buffer objects Expression of more layout controls of interface variables in shaders Efficient binding of multiple objects simultaneously Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1. OpenGL 4.5 Release date: August 11, 2014 Direct State Access (DSA) – object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility. Flush Control – applications can control flushing of pending commands before context switching – enabling high-performance multithreaded applications; Robustness – providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset affecting any other running applications; OpenGL ES 3.1 API and shader compatibility – to enable the easy development and execution of the latest OpenGL ES applications on desktop systems. 
Hardware support: AMD Radeon HD 5000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel HD Graphics in Intel Broadwell processors and newer (Linux Mesa: Haswell and newer), Nvidia GeForce 400 series and newer, Tegra K1, and Tegra X1. OpenGL 4.6 Release date: July 31, 2017 more efficient, GPU-sided, geometry processing more efficient shader execution more information through statistics, overflow query and counters higher performance through no-error-handling contexts clamping of the polygon offset function, which solves a shadow rendering problem SPIR-V shaders improved anisotropic filtering Hardware support: AMD Radeon HD 7000 series and newer (FP64 shaders implemented by emulation on some TeraScale GPUs), Intel Haswell and newer, Nvidia GeForce 400 series and newer. Driver support: Mesa 19.2 on Linux supports OpenGL 4.6 for Intel Broadwell and newer. Mesa 20.0 supports AMD Radeon GPUs, while support for Nvidia Kepler+ is in progress. Zink (an emulation driver) supports it with Mesa 21.1, and the software driver LLVMpipe with Mesa 21.0. AMD Adrenalin 18.4.1 Graphics Driver on Windows 7 SP1, 10 version 1803 (April 2018 update) for AMD Radeon HD 7700+, HD 8500+ and newer, released April 2018. Intel 26.20.100.6861 graphics driver on Windows 10, released May 2019. NVIDIA GeForce 397.31 Graphics Driver on Windows 7, 8, 10 (x86-64 only, no 32-bit support), released April 2018. Alternative implementations Apple deprecated OpenGL in iOS 12 and macOS 10.14 Mojave in favor of Metal, but it is still available as of macOS 14 Sonoma (including on Apple silicon devices). The latest OpenGL version supported is 4.1, from 2011. A proprietary library from Molten – the authors of MoltenVK – called MoltenGL can translate OpenGL calls to Metal. There are several projects that attempt to implement OpenGL on top of Vulkan. The Vulkan backend for Google's ANGLE achieved OpenGL ES 3.1 conformance in July 2020. The Mesa3D project also includes such a driver, called Zink. Microsoft's Windows 11 on Arm added support for OpenGL 3.3 via GLon12, an open-source OpenGL implementation on top of DirectX 12 via Mesa Gallium. Vulkan Vulkan, formerly named the "Next Generation OpenGL Initiative" (glNext), is a ground-up redesign effort to unify OpenGL and OpenGL ES into one common API that will not be backwards compatible with existing OpenGL versions. The initial version of the Vulkan API was released on February 16, 2016.
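The versioned, core-profile context machinery described in the version history above can be exercised through any of the language bindings mentioned earlier. The sketch below is illustrative only: it assumes the third-party pysdl2 and PyOpenGL packages and a driver exposing at least OpenGL 3.2, requests a core-profile context through SDL, and prints the version string the driver reports; the binding resolves GL entry points automatically, so no manual extension loading is needed.

```python
# Illustrative sketch (assumes pysdl2 + PyOpenGL are installed and a GL 3.2+ driver).
# Requests a core-profile OpenGL context via SDL and prints the reported version.
import sdl2
from OpenGL import GL

sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
sdl2.SDL_GL_SetAttribute(sdl2.SDL_GL_CONTEXT_MAJOR_VERSION, 3)
sdl2.SDL_GL_SetAttribute(sdl2.SDL_GL_CONTEXT_MINOR_VERSION, 2)
sdl2.SDL_GL_SetAttribute(sdl2.SDL_GL_CONTEXT_PROFILE_MASK,
                         sdl2.SDL_GL_CONTEXT_PROFILE_CORE)

window = sdl2.SDL_CreateWindow(b"GL context demo",
                               sdl2.SDL_WINDOWPOS_CENTERED,
                               sdl2.SDL_WINDOWPOS_CENTERED,
                               640, 480, sdl2.SDL_WINDOW_OPENGL)
context = sdl2.SDL_GL_CreateContext(window)

# The binding exposes core functions directly; which version string appears
# depends entirely on the installed driver.
print(GL.glGetString(GL.GL_VERSION).decode())
print(GL.glGetString(GL.GL_RENDERER).decode())

sdl2.SDL_GL_DeleteContext(context)
sdl2.SDL_DestroyWindow(window)
sdl2.SDL_Quit()
```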
Technology
Software development: General
null
22498
https://en.wikipedia.org/wiki/Orbit
Orbit
In celestial mechanics, an orbit (also known as orbital revolution) is the curved trajectory of an object such as the trajectory of a planet around a star, or of a natural satellite around a planet, or of an artificial satellite around an object or position in space such as a planet, moon, asteroid, or Lagrange point. Normally, orbit refers to a regularly repeating trajectory, although it may also refer to a non-repeating trajectory. To a close approximation, planets and satellites follow elliptic orbits, with the center of mass being orbited at a focal point of the ellipse, as described by Kepler's laws of planetary motion. For most situations, orbital motion is adequately approximated by Newtonian mechanics, which explains gravity as a force obeying an inverse-square law. However, Albert Einstein's general theory of relativity, which accounts for gravity as due to curvature of spacetime, with orbits following geodesics, provides a more accurate calculation and understanding of the exact mechanics of orbital motion. History Historically, the apparent motions of the planets were described by European and Arabic philosophers using the idea of celestial spheres. This model posited the existence of perfect moving spheres or rings to which the stars and planets were attached. It assumed the heavens were fixed apart from the motion of the spheres and was developed without any understanding of gravity. After the planets' motions were more accurately measured, theoretical mechanisms such as deferent and epicycles were added. Although the model was capable of reasonably accurately predicting the planets' positions in the sky, more and more epicycles were required as the measurements became more accurate, hence the model became increasingly unwieldy. Originally geocentric, it was modified by Copernicus to place the Sun at the centre to help simplify the model. The model was further challenged during the 16th century, as comets were observed traversing the spheres. The basis for the modern understanding of orbits was first formulated by Johannes Kepler whose results are summarised in his three laws of planetary motion. First, he found that the orbits of the planets in our Solar System are elliptical, not circular (or epicyclic), as had previously been believed, and that the Sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed depends on the planet's distance from the Sun. Third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the Sun. For the planets, the cubes of their distances from the Sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the Sun, their orbital periods respectively about 11.86 and 0.615 years. The proportionality is seen by the fact that the ratio for Jupiter, 5.2³/11.86², is practically equal to that for Venus, 0.723³/0.615², in accord with the relationship. Idealised orbits meeting these rules are known as Kepler orbits. Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections (this assumes that the force of gravity propagates instantaneously).
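As a quick worked check of the proportionality just described, using only the approximate distances (in AU) and periods (in years) quoted above, the ratio of the cube of the distance to the square of the period is essentially the same for both planets:

\[
\frac{a_{\mathrm{Jupiter}}^{3}}{T_{\mathrm{Jupiter}}^{2}} = \frac{5.2^{3}}{11.86^{2}} \approx \frac{140.6}{140.7} \approx 1.00\ \mathrm{AU^{3}/yr^{2}},
\qquad
\frac{a_{\mathrm{Venus}}^{3}}{T_{\mathrm{Venus}}^{2}} = \frac{0.723^{3}}{0.615^{2}} \approx \frac{0.378}{0.378} \approx 1.00\ \mathrm{AU^{3}/yr^{2}}.
\]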
Newton showed that, for a pair of bodies, the orbits' sizes are in inverse proportion to their masses, and that those bodies orbit their common center of mass. Where one body is much more massive than the other (as is the case of an artificial satellite orbiting a planet), it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body. Advances in Newtonian mechanics were then used to explore variations from the simple assumptions behind Kepler orbits, such as the perturbations due to other bodies, or the impact of spheroidal rather than spherical bodies. Joseph-Louis Lagrange developed a new approach to Newtonian mechanics emphasizing energy more than force, and made progress on the three-body problem, discovering the Lagrangian points. In a dramatic vindication of classical mechanics, in 1846 Urbain Le Verrier was able to predict the position of Neptune based on unexplained perturbations in the orbit of Uranus. Albert Einstein in his 1916 paper The Foundation of the General Theory of Relativity explained that gravity was due to curvature of space-time and removed Newton's assumption that changes in gravity propagate instantaneously. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy in understanding orbits. In relativity theory, orbits follow geodesic trajectories which are usually approximated very well by the Newtonian predictions (except where there are very strong gravity fields and very high speeds) but the differences are measurable. Essentially all the experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measurement accuracy. The original vindication of general relativity is that it was able to account for the remaining unexplained amount in precession of Mercury's perihelion first noted by Le Verrier. However, Newton's solution is still used for most short term purposes since it is significantly easier to use and sufficiently accurate. Planetary orbits Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. Bodies that are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet. Owing to mutual gravitational perturbations, the eccentricities of the planetary orbits vary over time. Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest orbital eccentricities are seen with Venus and Neptune. As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an orbit around Earth, while perihelion and aphelion are the closest and farthest points of an orbit around the Sun.) In the case of planets orbiting a star, the mass of the star and all its satellites are calculated to be at a single point called the barycenter. The paths of all the star's satellites are elliptical orbits about that barycenter. 
Each satellite in that system will have its own elliptical orbit with the barycenter at one focal point of that ellipse. At any point along its orbit, any satellite will have a certain value of kinetic and potential energy with respect to the barycenter, and the sum of those two energies is a constant value at every point along its orbit. As a result, as a planet approaches periapsis, the planet will increase in speed as its potential energy decreases; as a planet approaches apoapsis, its velocity will decrease as its potential energy increases. Principles There are a few common ways of understanding orbits: A force, such as gravity, pulls an object into a curved path as it attempts to fly off in a straight line. As the object is pulled toward the massive body, it falls toward that body. However, if it has enough tangential velocity it will not fall into the body but will instead continue to follow the curved trajectory caused by that body indefinitely. The object is then said to be orbiting the body. The velocity relationship of two moving objects with mass can thus be considered in four practical classes, with subtypes: No orbit Suborbital trajectories Range of interrupted elliptical paths Orbital trajectories (or simply, orbits) Open (or escape) trajectories Orbital rockets are launched vertically at first to lift the rocket above the atmosphere (which causes frictional drag), and then slowly pitch over and finish firing the rocket engine parallel to the atmosphere to achieve orbit speed. Once in orbit, their speed keeps them in orbit above the atmosphere. If e.g., an elliptical orbit dips into dense air, the object will lose speed and re-enter (i.e. fall). Occasionally a space craft will intentionally intercept the atmosphere, in an act commonly referred to as an aerobraking maneuver. Illustration As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). This is a 'thought experiment', in which a cannon on top of a tall mountain is able to fire a cannonball horizontally at any chosen muzzle speed. The effects of air friction on the cannonball are ignored (or perhaps the mountain is high enough that the cannon is above the Earth's atmosphere, which is the same thing). If the cannon fires its ball with a low initial speed, the trajectory of the ball curves downward and hits the ground (A). As the firing speed is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense—they are describing a portion of an elliptical path around the center of gravity—but the orbits are interrupted by striking the Earth. If the cannonball is fired with sufficient speed, the ground curves away from the ball at least as much as the ball falls—so the ball never strikes the ground. It is now in what could be called a non-interrupted or circumnavigating, orbit. For any specific combination of height above the center of gravity and mass of the planet, there is one specific firing speed (unaffected by the mass of the ball, which is assumed to be very small relative to the Earth's mass) that produces a circular orbit, as shown in (C). As the firing speed is increased beyond this, non-interrupted elliptic orbits are produced; one is shown in (D). 
If the initial firing is above the surface of the Earth as shown, there will also be non-interrupted elliptical orbits at slower firing speed; these will come closest to the Earth at the point half an orbit beyond, and directly opposite the firing point, below the circular orbit. At a specific horizontal firing speed called escape velocity, dependent on the mass of the planet and the distance of the object from the barycenter, an open orbit (E) is achieved that has a parabolic path. At even greater speeds the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space" never to return. Newton's laws of motion Newton's law of gravitation and laws of motion for two-body problems In most situations, relativistic effects can be neglected, and Newton's laws give a sufficiently accurate description of motion. The acceleration of a body is equal to the sum of the forces acting on it, divided by its mass, and the gravitational force acting on a body is proportional to the product of the masses of the two attracting bodies and decreases inversely with the square of the distance between them. To this Newtonian approximation, for a system of two-point masses or spherical bodies, only influenced by their mutual gravitation (called a two-body problem), their trajectories can be exactly calculated. If the heavier body is much more massive than the smaller, as in the case of a satellite or small moon orbiting a planet or for the Earth orbiting the Sun, it is accurate enough and convenient to describe the motion in terms of a coordinate system that is centered on the heavier body, and we say that the lighter body is in orbit around the heavier. For the case where the masses of two bodies are comparable, an exact Newtonian solution is still sufficient and can be had by placing the coordinate system at the center of the mass of the system. Defining gravitational potential energy Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. For point masses, the gravitational energy decreases to zero as they approach zero separation. It is convenient and conventional to assign the potential energy as having zero value when they are an infinite distance apart, and hence it has a negative value (since it decreases from zero) for smaller finite distances. Orbital energies and orbit shapes When only two gravitational bodies interact, their orbits follow a conic section. The orbit can be open (implying the object never returns) or closed (returning). Which it is depends on the total energy (kinetic + potential energy) of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position, in the case of a closed orbit, the speed is always less than the escape velocity. Since the kinetic energy is never negative if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits will have negative total energy, the parabolic trajectories zero total energy, and hyperbolic orbits positive total energy. 
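The classification above can be written compactly. In the usual notation (chosen here for illustration, not taken from this article), with μ the standard gravitational parameter, the specific orbital energy

\[
\varepsilon = \frac{v^{2}}{2} - \frac{\mu}{r}
\]

is conserved along the trajectory: ε < 0 for a bound (elliptic) orbit, ε = 0 for a parabolic escape trajectory, and ε > 0 for a hyperbolic trajectory, in agreement with the sign convention for potential energy adopted above.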
An open orbit will have a parabolic shape if it has the velocity of exactly the escape velocity at that point in its trajectory, and it will have the shape of a hyperbola when its velocity is greater than the escape velocity. When bodies with escape velocity or greater approach each other, they will briefly curve around each other at the time of their closest approach, and then separate, forever. All closed orbits have the shape of an ellipse. A circular orbit is a special case, wherein the foci of the ellipse coincide. The point where the orbiting body is closest to Earth is called the perigee, and when orbiting a body other than earth it is called the periapsis (less properly, "perifocus" or "pericentron"). The point where the satellite is farthest from Earth is called the apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part. Kepler's laws Bodies following closed orbits repeat their paths with a certain time called the period. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. These can be formulated as follows: The orbit of a planet around the Sun is an ellipse, with the Sun in one of the focal points of that ellipse. [This focal point is actually the barycenter of the Sun-planet system; for simplicity, this explanation assumes the Sun's mass is infinitely larger than that planet's.] The planet's orbit lies in a plane, called the orbital plane. The point on the orbit closest to the attracting body is the periapsis. The point farthest from the attracting body is called the apoapsis. There are also specific terms for orbits about particular bodies; things orbiting the Sun have a perihelion and aphelion, things orbiting the Earth have a perigee and apogee, and things orbiting the Moon have a perilune and apolune (or periselene and aposelene respectively). An orbit around any star, not just the Sun, has a periastron and an apastron. As the planet moves in its orbit, the line from the Sun to the planet sweeps a constant area of the orbital plane for a given period of time, regardless of which part of its orbit the planet traces during that period of time. This means that the planet moves faster near its perihelion than near its aphelion, because at the smaller distance it needs to trace a greater arc to cover the same area. This law is usually stated as "equal areas in equal time." For a given orbit, the ratio of the cube of its semi-major axis to the square of its period is constant. Limitations of Newton's law of gravitation Note that while bound orbits of a point mass or a spherical body with a Newtonian gravitational field are closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (such as caused by the slight oblateness of the Earth, or by relativistic effects, thereby changing the gravitational field's behavior with distance) will cause the orbit's shape to depart from the closed ellipses characteristic of Newtonian two-body motion. The two-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the three-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies. 
Approaches to many-body problems Rather than an exact closed form solution, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms: One form takes the pure elliptic motion as a basis and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies. The equations of motion of the moons, planets, and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still, there are secular phenomena that have to be dealt with by post-Newtonian methods. The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces acting on a body will equal the mass of the body times its acceleration (F = ma). Therefore accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial values of position and velocity corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a short time in the future, then repeat the calculation ad nauseam. However, tiny arithmetic errors from the limited accuracy of a computer's math are cumulative, which limits the accuracy of this approach. Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large assemblages of objects have been simulated. Formulation Newtonian analysis of orbital motion The following derivation applies to such an elliptical orbit. We start only with the Newtonian law of gravitation stating that the gravitational acceleration towards the central body is related to the inverse of the square of the distance between them, namely where F2 is the force acting on the mass m2 caused by the gravitational attraction mass m1 has for m2, G is the universal gravitational constant, and r is the distance between the two masses centers. From Newton's Second Law, the summation of the forces acting on m2 related to that body's acceleration: where A2 is the acceleration of m2 caused by the force of gravitational attraction F2 of m1 acting on m2. Combining Eq. 1 and 2: Solving for the acceleration, A2: where is the standard gravitational parameter, in this case . It is understood that the system being described is m2, hence the subscripts can be dropped. We assume that the central body is massive enough that it can be considered to be stationary and we ignore the more subtle effects of general relativity. When a pendulum or an object attached to a spring swings in an ellipse, the inward acceleration/force is proportional to the distance Due to the way vectors add, the component of the force in the or in the directions are also proportionate to the respective components of the distances, . Hence, the entire analysis can be done separately in these dimensions. This results in the harmonic parabolic equations and of the ellipse. The location of the orbiting object at the current time is located in the plane using vector calculus in polar coordinates both with the standard Euclidean basis and with the polar basis with the origin coinciding with the center of force. Let be the distance between the object and the center and be the angle it has rotated. 
Let and be the standard Euclidean bases and let and be the radial and transverse polar basis with the first being the unit vector pointing from the central body to the current location of the orbiting object and the second being the orthogonal unit vector pointing in the direction that the orbiting object would travel if orbiting in a counter clockwise circle. Then the vector to the orbiting object is We use and to denote the standard derivatives of how this distance and angle change over time. We take the derivative of a vector to see how it changes over time by subtracting its location at time from that at time and dividing by . The result is also a vector. Because our basis vector moves as the object orbits, we start by differentiating it. From time to , the vector keeps its beginning at the origin and rotates from angle to which moves its head a distance in the perpendicular direction giving a derivative of . We can now find the velocity and acceleration of our orbiting object. The coefficients of and give the accelerations in the radial and transverse directions. As said, Newton gives this first due to gravity is and the second is zero. Equation (2) can be rearranged using integration by parts. We can multiply through by because it is not zero unless the orbiting object crashes. Then having the derivative be zero gives that the function is a constant. which is actually the theoretical proof of Kepler's second law (A line joining a planet and the Sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. In order to get an equation for the orbit from equation (1), we need to eliminate time. (
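The inline formulas in the derivation above did not survive extraction. In conventional notation (a reconstruction, not the article's own markup), the relations being described are

\[
F_{2} = \frac{G m_{1} m_{2}}{r^{2}}, \qquad F_{2} = m_{2} A_{2} \quad\Rightarrow\quad A_{2} = \frac{G m_{1}}{r^{2}} = \frac{\mu}{r^{2}},
\]

and, resolving the motion in polar coordinates \((r, \theta)\) about the center of force,

\[
\ddot{r} - r\dot{\theta}^{2} = -\frac{\mu}{r^{2}}, \qquad r\ddot{\theta} + 2\dot{r}\dot{\theta} = 0 .
\]

The second equation integrates to \(r^{2}\dot{\theta} = h\), a constant (the angular momentum per unit mass), which is the statement of Kepler's second law referred to in the text.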
Physical sciences
Astronomy
null
22513
https://en.wikipedia.org/wiki/Ogg
Ogg
Ogg is a free, open container format maintained by the Xiph.Org Foundation. The authors of the Ogg format state that it is unrestricted by software patents and is designed to provide for efficient streaming and manipulation of high-quality digital multimedia. Its name is derived from "ogging", jargon from the computer game Netrek. The Ogg container format can multiplex a number of independent streams for audio, video, text (such as subtitles), and metadata. In the Ogg multimedia framework, Theora provides a lossy video layer. The audio layer is most commonly provided by the music-oriented Vorbis format or its successor Opus. Lossless audio compression formats include FLAC, and OggPCM. Before 2007, the .ogg filename extension was used for all files whose content used the Ogg container format. Since 2007, the Xiph.Org Foundation recommends that .ogg only be used for Ogg Vorbis audio files. The Xiph.Org Foundation decided to create a new set of file extensions and media types to describe different types of content such as .oga for audio only files, .ogv for video with or without sound (including Theora), and .ogx for multiplexed Ogg. As of November 7, 2017, the current version of the Xiph.Org Foundation's reference implementation is libogg 1.3.3. Another version, libogg2, has been in development, but is awaiting a rewrite as of 2018. Both software libraries are free software, released under the New BSD License. Ogg reference implementation was separated from Vorbis on September 2, 2000. Ogg's various codecs have been incorporated into a number of different free and proprietary media players, both commercial and non-commercial, as well as portable media players and GPS receivers from different manufacturers. Naming The Ogg Vorbis project started in 1993. It was originally named "Squish" but that name was already trademarked, so the project underwent a name change. The new name, "OggSquish", was used until 2001 when it was changed again to "Ogg". Ogg has since come to refer to the container format, which is now part of the larger Xiph.org multimedia project. Today, "Squish" (now known as "Vorbis") refers to a particular audio coding format typically used with the Ogg container format. Ogg is derived from "ogging", jargon from the computer game Netrek, which came to mean doing something forcefully, possibly without consideration of the drain on future resources. At its inception, the Ogg project was thought by the authors to be somewhat ambitious given the limited power of the PC hardware of the time. Although the name Ogg is unrelated to the character Nanny Ogg in Terry Pratchett's Discworld novels, "Vorbis" is named after another Terry Pratchett character from the book Small Gods. File format The "Ogg" bitstream format, designed principally by the Xiph.Org Foundation, has been developed as the framework of a larger initiative aimed at producing a set of components for the coding and decoding of multimedia files, which are available free of charge and freely re-implementable in software and hardware. The format consists of chunks of data each called an "Ogg page". Each page begins with the characters "OggS" to identify the file as Ogg format. A "serial number" and "page number" in the page header identifies each page as part of a series of pages making up a bitstream. Multiple bitstreams may be multiplexed in the file where pages from each bitstream are ordered by the seek time of the contained data. 
Bitstreams may also be appended to existing files, a process known as "chaining", to cause the bitstreams to be decoded in sequence. A BSD-licensed library, called "libvorbis", is available to encode and decode data from "Vorbis" streams. Independent Ogg implementations are used in several projects such as RealPlayer and a set of DirectShow filters. Mogg, the "Multi-Track-Single-Logical-Stream Ogg-Vorbis", is the multi-channel or multi-track Ogg file format. Page structure The following is the field layout of an Ogg page header: Capture pattern – 32 bits The capture pattern or sync code is a magic number used to ensure synchronization when parsing Ogg files. Every page starts with the four ASCII character sequence, "OggS". This assists in resynchronizing a parser in cases where data has been lost or is corrupted, and is a sanity check before commencing parsing of the page structure. Version – 8 bits This field indicates the version of the Ogg bitstream format, to allow for future expansion. It is currently mandated to be 0. Header type – 8 bits This is an 8 bit field of flags, which indicates the type of page that follows: bit 0 (value 0x01), the continuation flag – the first packet on this page is a continuation of the previous packet in the logical bitstream; bit 1 (value 0x02), BOS (Beginning Of Stream) – this page is the first page in the logical bitstream; the BOS flag must be set on the first page of every logical bitstream, and must not be set on any other page; bit 2 (value 0x04), EOS (End Of Stream) – this page is the last page in the logical bitstream; the EOS flag must be set on the final page of every logical bitstream, and must not be set on any other page. Granule position – 64 bits A granule position is the time marker in Ogg files. It is an abstract value, whose meaning is determined by the codec. It may, for example, be a count of the number of samples, the number of frames or a more complex scheme. Bitstream serial number – 32 bits This field is a serial number that identifies a page as belonging to a particular logical bitstream. Each logical bitstream in a file has a unique value, and this field allows implementations to deliver the pages to the appropriate decoder. In a typical Vorbis and Theora file, one stream is the audio (Vorbis), and the other is the video (Theora). Page sequence number – 32 bits This field is a monotonically increasing field for each logical bitstream. The first page is 0, the second 1, etc. This allows implementations to detect when data has been lost. Checksum – 32 bits This field provides a CRC32 checksum of the data in the entire page (including the page header, calculated with the checksum field set to 0). This allows verification that the data has not been corrupted since it was authored. Pages that fail the checksum should be discarded. The checksum is generated using a polynomial value of 0x04C11DB7. Page segments – 8 bits This field indicates the number of segments that exist in this page. It also indicates how many bytes are in the segment table that follows this field. There can be a maximum of 255 segments in any one page. Segment table The segment table is an array of 8-bit values, each indicating the length of the corresponding segment within the page body. The number of segments is determined from the preceding page segments field. Each segment is between 0 and 255 bytes in length.
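The field layout just described maps onto a 27-byte little-endian header followed by the segment table. The sketch below is a minimal, illustrative parser: the field names and the file path are chosen here for the example, and no CRC verification is performed.

```python
# Minimal sketch of reading Ogg page headers (27 bytes + segment table).
# Field names are illustrative; the CRC is read but not verified.
import struct

def read_ogg_page_header(f):
    raw = f.read(27)
    if len(raw) < 27:
        return None  # end of file
    (capture, version, header_type, granule_pos,
     serial, page_seq, crc, n_segments) = struct.unpack("<4sBBqIIIB", raw)
    if capture != b"OggS":
        raise ValueError("lost sync: capture pattern not found")
    segment_table = f.read(n_segments)   # one lacing value per segment
    body_size = sum(segment_table)       # total length of the page body in bytes
    return {
        "version": version,
        "continuation": bool(header_type & 0x01),
        "bos": bool(header_type & 0x02),
        "eos": bool(header_type & 0x04),
        "granule_position": granule_pos,
        "serial_number": serial,
        "page_sequence": page_seq,
        "crc": crc,
        "body_size": body_size,
    }

# Usage: walk the pages of a file ("example.ogg" is a placeholder path),
# skipping each page body and printing which logical bitstream it belongs to.
with open("example.ogg", "rb") as f:
    while (page := read_ogg_page_header(f)) is not None:
        f.seek(page["body_size"], 1)
        print(page["serial_number"], page["page_sequence"],
              "BOS" if page["bos"] else "", "EOS" if page["eos"] else "")
```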
The segments provide a way to group segments into packets, which are meaningful units of data for the decoder. When the segment's length is indicated to be 255, this indicates that the following segment is to be concatenated to this one and is part of the same packet. When the segment's length is 0–254, this indicates that this segment is the final segment in this packet. Where a packet's length is a multiple of 255, the final segment is length 0. Where the final packet continues on the next page, the final segment value is 255, and the continuation flag is set on the following page to indicate that the start of the new page is a continuation of last page. Metadata VorbisComment is a base-level Metadata format initially authored for use with Ogg Vorbis. It has since been adopted in the specifications of Ogg encapsulations for other Xiph.Org codecs including Theora, Speex, FLAC and Opus. VorbisComment is the simplest and most widely supported mechanism for storing metadata with Xiph.Org codecs. Notably, one or more METADATA_BLOCK_PICTURE=... in a VorbisComment for thumbnails and cover art have Base64-encoded values of the corresponding FLAC METADATA_BLOCK_PICTURE. In other words, FLAC stores thumbnails and cover art in binary blocks—outside of the FLAC tags in a little-endian METADATA_BLOCK_VORBIS_COMMENT. Other existing and proposed mechanisms are: FLAC metadata blocks Ogg Skeleton Continuous Media Markup Language (deprecated) History The Ogg project began with a simple audio compression package as part of a larger project in 1993. The software was originally named Squish but due to an existing trade mark it was renamed to OggSquish. This name was later used for the whole Ogg project. In 1997, the Xiphophorus OggSquish was described as "an attempt both to create a flexible compressed audio format for modern audio applications as well as to provide the first audio format that is common on any and every modern computer platform". The OggSquish was in 2000 referred to as "a group of several related multimedia and signal processing projects". In 2000, two projects were in active development for planned release: Ogg Vorbis format and libvorbis—the reference implementation of Vorbis. Research also included work on future video and lossless audio coding. In 2001, OggSquish was renamed to Ogg and it was described as "the umbrella for a group of several related multimedia and signal processing projects". Ogg has come to stand for the file format, as part of the larger Xiph.org multimedia project. Squish became just the name of one of the Ogg codecs. In 2009, Ogg is described as "a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs". The Ogg reference implementation was separated from Vorbis on September 2, 2000. In May 2003, two Internet RFCs were published relating to the format. The Ogg bitstream was defined in (which is classified as 'informative') and its Internet content type (application/ogg) in (which is, , a proposed standard protocol). In September 2008, RFC 3534 was obsoleted by , which added content types video/ogg, audio/ogg and filename extensions .ogx, .ogv, .oga, .spx. OGM In 2002, the lack of formal video support in Ogg resulted in the development of the OGM file format, a hack on Ogg that allowed embedding of video from the Microsoft DirectShow framework into an Ogg-based wrapper. OGM was initially supported only by closed source Windows-only tools, but the codebase was subsequently opened. 
Later, video (and subtitle) support were formally specified for Ogg but in a manner incompatible with OGM. Independently, the Matroska container format reached maturity and provided an alternative for people interested in combining Vorbis audio and arbitrary video codecs. As a result, OGM is no longer supported or developed and is formally discouraged by Xiph.org. Today, video in Ogg is found with the .ogv file extension, which is formally specified and officially supported. Software and codecs that support .ogm files are available without charge. 2006 Although Ogg had not reached anywhere near the ubiquity of the MPEG standards (e.g., MP3/MP4), , it was commonly used to encode free content (such as free music, multimedia on Wikimedia Foundation projects and Creative Commons files) and had started to be supported by a significant minority of digital audio players. Also supporting the Ogg format were many popular video game engines, including Doom 3, Unreal Tournament 2004, Halo: Combat Evolved, Jets'n'Guns, Mafia: The City of Lost Heaven, Myst IV: Revelation, StepMania, Serious Sam: The Second Encounter, Lineage 2, Vendetta Online, Battlefield 2, and the Grand Theft Auto engines, as well as the audio files of the Java-based game, Minecraft. The more popular Vorbis codec had built-in support on many software players, and extensions were available for nearly all the rest. 2007 On May 16, 2007, the Free Software Foundation started a campaign to increase the use of Vorbis "as an ethically, legally and technically superior audio alternative to the proprietary MP3 format". People were also encouraged to support the campaign by adding a web button to their website or blog. For those who did not want to download and use the FSF's suggested Ogg player (VLC), the Xiph.Org Foundation had an official codec for QuickTime-based applications in Windows and Mac OS X, such as iTunes players and iMovie applications; and Windows users could install a Windows Media Player Ogg codec. 2009 By June 30, 2009, the Ogg container, through the use of the Theora and Vorbis, was the only container format included in Firefox 3.5 web browser's implementation of the HTML5 <video> and <audio> elements. This was in accordance with the original recommendation outlined in, but later removed from, the HTML5 draft specification (see Ogg controversy). 2010 On March 3, 2010, a technical analysis by an FFmpeg developer was critical about the general purpose abilities of Ogg as a multimedia container format. The author of Ogg later responded to these claims in an article of his own. Ogg codecs Ogg is only a container format. The actual audio or video encoded by a codec is stored inside an Ogg container. Ogg containers may contain streams encoded with multiple codecs; for example, a video file with sound contains data encoded by both an audio codec and a video codec. Being a container format, Ogg can embed audio and video in various formats (such as Dirac, MNG, CELT, MPEG-4, MP3 and others) but Ogg was intended to be, and usually is, used with the following Xiph.org free codecs: Audio Lossy Speex: handles voice data at low bitrates (~2.1–32 kbit/s/channel) Vorbis: handles general audio data at mid to high-level variable bitrates (≈16–500 kbit/s per channel) Opus: handles voice, music and generic audio at low and high variable bitrates (≈6–510 kbit/s per channel) Lossless FLAC handles archival and high-fidelity audio data. 
OggPCM allows storing standard uncompressed PCM audio in an Ogg container Video Lossy Theora: based upon On2's VP3, it is targeted at competing with MPEG-4 video (for example, encoded with DivX or Xvid), RealVideo, or Windows Media Video. Daala: a video coding format under development. Tarkin: an experimental and now obsolete video codec developed in 2000, 2001 and 2002 utilizing discrete wavelet transforms in the three dimensions of width, height, and time. It has been put on hold after Theora became the main focus for video encoding (in August 2002). Dirac: a free and open video format developed by the BBC. Uses wavelet encoding. Lossless Dirac: a part of the specification of dirac covers lossless compression. Daala: a video coding format under development. Text Continuous Media Markup Language: a text/application codec for timed metadata, captioning, and formatting. Annodex: A free and open source set of standards developed by CSIRO to annotate and index networked media. OggKate: An overlay codec, originally designed for karaoke and text, that can be multiplexed in Ogg. Media types Ogg audio media is registered as IANA media type audio/ogg with file extensions .oga, .ogg, and .spx. It is a proper subset of the Ogg video media type video/ogg with file extension .ogv. Other Ogg applications use media type application/ogg with file extension .ogx; this is a superset of video/ogg. The Opus media type audio/opus with file extension .opus was registered later in RFC and .
Technology
File formats
null
22522
https://en.wikipedia.org/wiki/Oscillation
Oscillation
Oscillation is the repetitive or periodic variation, typically in time, of some measure about a central value (often a point of equilibrium) or between two or more different states. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations can be used in physics to approximate complex interactions, such as those between atoms. Oscillations occur not only in mechanical systems but also in dynamic systems in virtually every area of science: for example the beating of the human heart (for circulation), business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, vibration of strings in guitar and other string instruments, periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The term vibration is precisely used to describe a mechanical oscillation. Oscillation, especially rapid oscillation, may be an undesirable phenomenon in process control and control theory (e.g. in sliding mode control), where the aim is convergence to stable state. In these cases it is called chattering or flapping, as in valve chatter, and route flapping. Simple harmonic oscillation The simplest mechanical oscillating system is a weight attached to a linear spring subject to only weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period. The systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator and the regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium. In the case of the spring-mass system, Hooke's law states that the restoring force of a spring is: By using Newton's second law, the differential equation can be derived: where The solution to this differential equation produces a sinusoidal position function: where is the frequency of the oscillation, is the amplitude, and is the phase shift of the function. These are determined by the initial conditions of the system. Because cosine oscillates between 1 and −1 infinitely, our spring-mass system would oscillate between the positive and negative amplitude forever without friction. Two-dimensional oscillators In two or three dimensions, harmonic oscillators behave similarly to one dimension. 
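The one-dimensional formulas referred to above were lost in extraction; in standard form (a reconstruction, with notation chosen here rather than taken from the original), Hooke's law, the resulting equation of motion, and its sinusoidal solution read

\[
F = -kx, \qquad m\ddot{x} + kx = 0, \qquad x(t) = A\cos(\omega t + \varphi), \qquad \omega = \sqrt{k/m},
\]

where A is the amplitude, ω the angular frequency and φ the phase shift, all fixed by the initial conditions. The two- and three-dimensional isotropic oscillators discussed next apply the same equation independently along each axis.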
The simplest example of this is an isotropic oscillator, where the restoring force is proportional to the displacement from equilibrium with the same restorative constant in all directions. This produces a similar solution, but now there is a different equation for every direction. Anisotropic oscillators With anisotropic oscillators, different directions have different constants of restoring forces. The solution is similar to isotropic oscillators, but there is a different frequency in each direction. Varying the frequencies relative to each other can produce interesting results. For example, if the frequency in one direction is twice that of another, a figure eight pattern is produced. If the ratio of frequencies is irrational, the motion is quasiperiodic. This motion is periodic on each axis, but is not periodic with respect to r, and will never repeat. Damped oscillations All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment. This is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by oscillation decay of the harmonic oscillator. Damped oscillators are created when a resistive force is introduced, which is dependent on the first derivative of the position, or in this case velocity. The differential equation created by Newton's second law adds in this resistive force with an arbitrary constant . This example assumes a linear dependence on velocity. This equation can be rewritten as before: where . This produces the general solution: where . The exponential term outside of the parenthesis is the decay function and is the damping coefficient. There are 3 categories of damped oscillators: under-damped, where ; over-damped, where ; and critically damped, where . Driven oscillations In addition, an oscillating system may be subject to some external force, as when an AC circuit is connected to an outside power source. In this case the oscillation is said to be driven. The simplest example of this is a spring-mass system with a sinusoidal driving force. where This gives the solution: where and The second term of is the transient solution to the differential equation. The transient solution can be found by using the initial conditions of the system. Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing (from its equilibrium) results in an increase in the angle of attack of the wing on the air flow and a consequential increase in lift coefficient, leading to a still greater displacement. At sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation. Resonance Resonance occurs in a damped driven oscillator when ω = ω0, that is, when the driving frequency is equal to the natural frequency of the system. When this occurs, the denominator of the amplitude is minimized, which maximizes the amplitude of the oscillations. Coupled oscillations The harmonic oscillator and the systems it models have a single degree of freedom. 
Coupled oscillations

The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example, two masses and three springs (each mass being attached to fixed points and to each other). In such cases, the behavior of each variable influences that of the others. This leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks (of identical frequency) mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. The apparent motion of the compound oscillations typically appears very complicated, but a more economic, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes.

The simplest form of coupled oscillators is a 3 spring, 2 mass system, where masses and spring constants are the same. This problem begins with deriving Newton's second law for both masses:

\(m\ddot{x}_1 = -kx_1 + k(x_2 - x_1)\)
\(m\ddot{x}_2 = -k(x_2 - x_1) - kx_2\)

The equations are then generalized into matrix form:

\(M\ddot{\mathbf{x}} = -K\mathbf{x}\), where \(M = \begin{pmatrix} m & 0 \\ 0 & m \end{pmatrix}\), \(\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\), and \(K = \begin{pmatrix} 2k & -k \\ -k & 2k \end{pmatrix}\)

The values of \(k\) and \(m\) can be substituted into the matrices. These matrices can now be plugged into the general solution \(\mathbf{x}(t) = \mathbf{a}\cos(\omega t)\), which requires \(\det(K - \omega^2 M) = 0\). The determinant of this matrix yields a quadratic equation in \(\omega^2\). Depending on the starting point of the masses, this system has 2 possible frequencies (or a combination of the two). If the masses are started with their displacements in the same direction, the frequency is that of a single mass system, because the middle spring is never extended. If the two masses are started in opposite directions, the second, faster frequency is the frequency of the system.

More special cases are the coupled oscillators where energy alternates between two forms of oscillation. Well-known is the Wilberforce pendulum, where the oscillation alternates between the elongation of a vertical spring and the rotation of an object at the end of that spring.

Coupled oscillators are a common description of two related, but different phenomena. One case is where both oscillations affect each other mutually, which usually leads to the occurrence of a single, entrained oscillation state, where both oscillate with a compromise frequency. Another case is where one external oscillation affects an internal oscillation, but is not affected by this. In this case the regions of synchronization, known as Arnold tongues, can lead to highly complex phenomena, such as chaotic dynamics.

Small oscillation approximation

In physics, a system with a set of conservative forces and an equilibrium point can be approximated as a harmonic oscillator near equilibrium. An example of this is the Lennard-Jones potential, where the potential is given by:

\(U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]\)

The equilibrium points of the function are then found by setting the first derivative to zero:

\(\frac{dU}{dr} = 0 \;\Rightarrow\; r_{eq} = 2^{1/6}\sigma\)

The second derivative is then found, and used as the effective potential constant:

\(k_{\mathrm{eff}} = \left.\frac{d^{2}U}{dr^{2}}\right|_{r=r_{eq}}\)

The system will undergo oscillations near the equilibrium point. The force that creates these oscillations is derived from the effective potential constant above:

\(F = -k_{\mathrm{eff}}\,(r - r_{eq}) = m\ddot{r}\)

This differential equation can be re-written in the form of a simple harmonic oscillator:

\(\ddot{r} + \frac{k_{\mathrm{eff}}}{m}\,(r - r_{eq}) = 0\)

Thus, the frequency of small oscillations is:

\(\omega_0 = \sqrt{\frac{k_{\mathrm{eff}}}{m}}\)

Or, in general form:

\(\omega_0 = \sqrt{\left.\frac{d^{2}U}{dx^{2}}\right|_{x=x_0}\Big/\,m}\)
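A brief numerical sketch, using assumed reduced units ε = σ = m = 1, checks this recipe by estimating the effective spring constant of the Lennard-Jones well with a finite-difference second derivative at the analytical minimum:

```python
import math

eps, sigma, m = 1.0, 1.0, 1.0        # assumed illustrative values (reduced units)

def U(r):
    """Lennard-Jones potential energy."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_eq = 2 ** (1 / 6) * sigma          # analytical equilibrium separation

# Effective spring constant from a central-difference second derivative.
h = 1e-4
k_eff = (U(r_eq + h) - 2 * U(r_eq) + U(r_eq - h)) / h ** 2
omega0 = math.sqrt(k_eff / m)

print(f"r_eq   = {r_eq:.4f}")
print(f"k_eff  = {k_eff:.2f}   (analytical 36*2**(2/3)*eps/sigma**2 = {36 * 2 ** (2 / 3):.2f})")
print(f"omega0 = {omega0:.3f}")
```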
This approximation can be better understood by looking at the potential curve of the system. If the potential curve is thought of as a hill, then a ball placed anywhere on the curve will roll down along the slope of the curve. This is true because of the relationship between potential energy and force. By thinking of the potential in this way, one will see that at any local minimum there is a "well" in which the ball rolls back and forth (oscillates) between the turning points on either side of the minimum. This approximation is also useful for thinking of Kepler orbits.

Continuous system – waves

As the number of degrees of freedom becomes arbitrarily large, a system approaches continuity; examples include a string or the surface of a body of water. Such systems have (in the classical limit) an infinite number of normal modes and their oscillations occur in the form of waves that can characteristically propagate.

Mathematics

The mathematics of oscillation deals with the quantification of the amount that a sequence or function tends to move between extremes. There are several related notions: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set).

Examples

Mechanical
Double pendulum
Foucault pendulum
Helmholtz resonator
Oscillations in the Sun (helioseismology), stars (asteroseismology) and neutron-star oscillations
Quantum harmonic oscillator
Playground swing
String instruments
Torsional vibration
Tuning fork
Vibrating string
Wilberforce pendulum
Lever escapement

Electrical
Alternating current
Armstrong (or Tickler or Meissner) oscillator
Astable multivibrator
Blocking oscillator
Butler oscillator
Clapp oscillator
Colpitts oscillator
Delay-line oscillator
Electronic oscillator
Extended interaction oscillator
Hartley oscillator
Oscillistor
Phase-shift oscillator
Pierce oscillator
Relaxation oscillator
RLC circuit
Royer oscillator
Vačkář oscillator
Wien bridge oscillator

Electro-mechanical
Crystal oscillator

Optical
Laser (oscillation of the electromagnetic field with a frequency of order 10^15 Hz)
Oscillator Toda or self-pulsation (pulsation of the output power of a laser at frequencies of 10^4–10^6 Hz in the transient regime)
Quantum oscillator may refer to an optical local oscillator, as well as to a usual model in quantum optics.

Biological
Circadian rhythm
Bacterial circadian rhythms
Circadian oscillator
Lotka–Volterra equation
Neural oscillation
Oscillating gene
Segmentation clock

Human
Neural oscillation
Insulin release oscillations
Gonadotropin-releasing hormone pulsations
Pilot-induced oscillation
Voice production

Economic and social
Business cycle
Generation gap
Malthusian economics
News cycle

Climate and geophysics
Atlantic multidecadal oscillation
Chandler wobble
Climate oscillation
El Niño–Southern Oscillation
Pacific decadal oscillation
Quasi-biennial oscillation

Astrophysics
Neutron stars
Cyclic Model

Quantum mechanical
Neutral particle oscillation, e.g. neutrino oscillations
Quantum harmonic oscillator

Chemical
Belousov–Zhabotinsky reaction
Mercury beating heart
Briggs–Rauscher reaction
Bray–Liebhafsky reaction

Computing
Cellular automata oscillator
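As a numerical companion to the coupled spring-mass system discussed above, the following sketch (with assumed unit mass and spring constant) solves the quadratic obtained from \(\det(K - \omega^2 M) = 0\) and recovers the two normal-mode frequencies \(\sqrt{k/m}\) and \(\sqrt{3k/m}\):

```python
import math

m, k = 1.0, 1.0                      # assumed illustrative values

# For equal masses and springs, K = [[2k, -k], [-k, 2k]] and M = m*I, so
# det(K - w^2 M) = (2k - m*w^2)^2 - k^2 = m^2 w^4 - 4km w^2 + 3k^2 = 0.
a, b, c = m * m, -4 * k * m, 3 * k * k
disc = math.sqrt(b * b - 4 * a * c)
w2_slow = (-b - disc) / (2 * a)      # in-phase mode: middle spring never stretched
w2_fast = (-b + disc) / (2 * a)      # out-of-phase mode

print(f"slow mode omega = {math.sqrt(w2_slow):.4f} (expected {math.sqrt(k / m):.4f})")
print(f"fast mode omega = {math.sqrt(w2_fast):.4f} (expected {math.sqrt(3 * k / m):.4f})")
```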
https://en.wikipedia.org/wiki/Organometallic%20chemistry
Organometallic chemistry
Organometallic chemistry is the study of organometallic compounds, chemical compounds containing at least one chemical bond between a carbon atom of an organic molecule and a metal, including alkali, alkaline earth, and transition metals, and sometimes broadened to include metalloids like boron, silicon, and selenium, as well. Aside from bonds to organyl fragments or molecules, bonds to 'inorganic' carbon, like carbon monoxide (metal carbonyls), cyanide, or carbide, are generally considered to be organometallic as well. Some related compounds such as transition metal hydrides and metal phosphine complexes are often included in discussions of organometallic compounds, though strictly speaking, they are not necessarily organometallic. The related but distinct term "metalorganic compound" refers to metal-containing compounds lacking direct metal-carbon bonds but which contain organic ligands. Metal β-diketonates, alkoxides, dialkylamides, and metal phosphine complexes are representative members of this class. The field of organometallic chemistry combines aspects of traditional inorganic and organic chemistry. Organometallic compounds are widely used both stoichiometrically in research and industrial chemical reactions, as well as in the role of catalysts to increase the rates of such reactions (e.g., as in uses of homogeneous catalysis), where target molecules include polymers, pharmaceuticals, and many other types of practical products. Organometallic compounds Organometallic compounds are distinguished by the prefix "organo-" (e.g., organopalladium compounds), and include all compounds which contain a bond between a metal atom and a carbon atom of an organyl group. In addition to the traditional metals (alkali metals, alkali earth metals, transition metals, and post transition metals), lanthanides, actinides, semimetals, and the elements boron, silicon, arsenic, and selenium are considered to form organometallic compounds. Examples of organometallic compounds include Gilman reagents, which contain lithium and copper, and Grignard reagents, which contain magnesium. Boron-containing organometallic compounds are often the result of hydroboration and carboboration reactions. Tetracarbonyl nickel and ferrocene are examples of organometallic compounds containing transition metals. Other examples of organometallic compounds include organolithium compounds such as n-butyllithium (n-BuLi), organozinc compounds such as diethylzinc (Et2Zn), organotin compounds such as tributyltin hydride (Bu3SnH), organoborane compounds such as triethylborane (Et3B), and organoaluminium compounds such as trimethylaluminium (Me3Al). A naturally occurring organometallic complex is methylcobalamin (a form of Vitamin B12), which contains a cobalt-methyl bond. This complex, along with other biologically relevant complexes are often discussed within the subfield of bioorganometallic chemistry. Distinction from coordination compounds with organic ligands Many complexes feature coordination bonds between a metal and organic ligands. Complexes where the organic ligands bind the metal through a heteroatom such as oxygen or nitrogen are considered coordination compounds (e.g., heme A and Fe(acac)3). However, if any of the ligands form a direct metal-carbon (M-C) bond, then the complex is considered to be organometallic. Although the IUPAC has not formally defined the term, some chemists use the term "metalorganic" to describe any coordination compound containing an organic ligand regardless of the presence of a direct M-C bond. 
The status of compounds in which the canonical anion has a negative charge that is shared between (delocalized) a carbon atom and an atom more electronegative than carbon (e.g. enolates) may vary with the nature of the anionic moiety, the metal ion, and possibly the medium. In the absence of direct structural evidence for a carbon–metal bond, such compounds are not considered to be organometallic. For instance, lithium enolates often contain only Li-O bonds and are not organometallic, while zinc enolates (Reformatsky reagents) contain both Zn-O and Zn-C bonds, and are organometallic in nature.

Structure and properties

The metal-carbon bond in organometallic compounds is generally highly covalent. For highly electropositive elements, such as lithium and sodium, the carbon ligand exhibits carbanionic character, but free carbon-based anions are extremely rare, an example being cyanide. Most organometallic compounds are solids at room temperature; however, some are liquids, such as methylcyclopentadienyl manganese tricarbonyl, or even volatile liquids, such as nickel tetracarbonyl. Many organometallic compounds are air sensitive (reactive towards oxygen and moisture), and thus they must be handled under an inert atmosphere. Some organometallic compounds, such as triethylaluminium, are pyrophoric and will ignite on contact with air.

Concepts and techniques

As in other areas of chemistry, electron counting is useful for organizing organometallic chemistry. The 18-electron rule is helpful in predicting the stabilities of organometallic complexes, for example metal carbonyls and metal hydrides. The 18e rule has two representative electron counting models, the ionic and the neutral (also known as covalent) ligand models. The hapticity of a metal-ligand complex can influence the electron count. Hapticity (η, lowercase Greek eta) describes the number of contiguous atoms of a ligand coordinated to a metal. For example, ferrocene, [(η5-C5H5)2Fe], has two cyclopentadienyl ligands, each with a hapticity of 5, where all five carbon atoms of the C5H5 ligand bond equally and contribute one electron to the iron center. Ligands that bind through non-contiguous atoms are denoted by the Greek letter kappa, κ; chelating κ2-acetate is an example. The covalent bond classification method identifies three classes of ligands, X, L, and Z, which are based on the electron-donating interactions of the ligand. Many organometallic compounds do not follow the 18e rule. The metal atoms in organometallic compounds are frequently described by their d electron count and oxidation state. These concepts can be used to help predict their reactivity and preferred geometry. Chemical bonding and reactivity in organometallic compounds is often discussed from the perspective of the isolobal principle.
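To make the 18-electron bookkeeping concrete, a small illustrative tally for ferrocene (a sketch, not a general-purpose counting tool) can be written in either model; both arrive at 18 electrons:

```python
# Electron count for ferrocene, [(η5-C5H5)2Fe], in the two common counting models.

# Neutral (covalent) model: Fe(0) brings its 8 valence electrons (group 8),
# and each neutral η5-C5H5 radical donates 5 electrons.
neutral_count = 8 + 2 * 5

# Ionic model: Fe(2+) is d6 and brings 6 electrons,
# and each aromatic C5H5- anion donates 6 electrons.
ionic_count = 6 + 2 * 6

print(f"neutral (covalent) model: {neutral_count} electrons")   # 18
print(f"ionic model             : {ionic_count} electrons")     # 18
```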
A wide variety of physical techniques are used to determine the structure, composition, and properties of organometallic compounds. X-ray diffraction is a particularly important technique that can locate the positions of atoms within a solid compound, providing a detailed description of its structure. Other techniques, like infrared spectroscopy and nuclear magnetic resonance spectroscopy, are also frequently used to obtain information on the structure and bonding of organometallic compounds. Ultraviolet-visible spectroscopy is a common technique used to obtain information on the electronic structure of organometallic compounds. It is also used to monitor the progress of organometallic reactions, as well as to determine their kinetics. The dynamics of organometallic compounds can be studied using dynamic NMR spectroscopy. Other notable techniques include X-ray absorption spectroscopy, electron paramagnetic resonance spectroscopy, and elemental analysis.

Due to their high reactivity towards oxygen and moisture, organometallic compounds often must be handled using air-free techniques. Air-free handling of organometallic compounds typically requires the use of laboratory apparatuses such as a glovebox or Schlenk line.

History

Early developments in organometallic chemistry include Louis Claude Cadet's synthesis of methyl arsenic compounds related to cacodyl, William Christopher Zeise's platinum-ethylene complex, Edward Frankland's discovery of diethyl- and dimethylzinc, Ludwig Mond's discovery of Ni(CO)4, and Victor Grignard's organomagnesium compounds. (Although not always acknowledged as an organometallic compound, Prussian blue, a mixed-valence iron-cyanide complex, was first prepared in 1706 by paint maker Johann Jacob Diesbach as the first coordination polymer and synthetic material containing a metal-carbon bond.) The abundant and diverse products from coal and petroleum led to Ziegler–Natta, Fischer–Tropsch, and hydroformylation catalysis, which employ CO, H2, and alkenes as feedstocks and ligands. Recognition of organometallic chemistry as a distinct subfield culminated in the Nobel Prizes to Ernst Fischer and Geoffrey Wilkinson for work on metallocenes. In 2005, Yves Chauvin, Robert H. Grubbs and Richard R. Schrock shared the Nobel Prize for metal-catalyzed olefin metathesis.

Organometallic chemistry timeline

1760 Louis Claude Cadet de Gassicourt isolates the organoarsenic compound cacodyl
1827 William Christopher Zeise produces Zeise's salt, the first platinum/olefin complex
1848 Edward Frankland discovers diethylzinc
1890 Ludwig Mond discovers nickel carbonyl
1899 John Ulric Nef discovers alkynylation using sodium acetylides
1909 Paul Ehrlich introduces Salvarsan for the treatment of syphilis, an early arsenic-based organometallic compound
1912 Nobel Prize to Victor Grignard and Paul Sabatier
1930 Henry Gilman invents lithium cuprates, see Gilman reagent
1940 Eugene G. Rochow and Richard Müller discover the direct process for preparing organosilicon compounds
1930s and 1940s Otto Roelen and Walter Reppe develop metal-catalyzed hydroformylation and acetylene chemistry
1951 Walter Hieber is awarded the Alfred Stock prize for his work with metal carbonyl chemistry
1951 Ferrocene is discovered
1956 Dorothy Crowfoot Hodgkin determines the structure of vitamin B12, the first biomolecule found to contain a metal-carbon bond, see bioorganometallic chemistry
1963 Nobel Prize to Karl Ziegler and Giulio Natta for the Ziegler–Natta catalyst
1973 Nobel Prize to Geoffrey Wilkinson and Ernst Otto Fischer for work on sandwich compounds
1981 Nobel Prize to Roald Hoffmann and Kenichi Fukui for creation of the Woodward–Hoffmann rules
2001 Nobel Prize to W. S. Knowles, R. Noyori and Karl Barry Sharpless for asymmetric hydrogenation
2005 Nobel Prize to Yves Chauvin, Robert Grubbs, and Richard Schrock for metal-catalyzed alkene metathesis
2010 Nobel Prize to Richard F. Heck, Ei-ichi Negishi, and Akira Suzuki for palladium-catalyzed cross-coupling reactions
Scope

Subspecialty areas of organometallic chemistry include:

Period 2 elements: organolithium chemistry, organoberyllium chemistry, organoborane chemistry
Period 3 elements: organosodium chemistry, organomagnesium chemistry, organoaluminium chemistry, organosilicon chemistry
Period 4 elements: organocalcium chemistry, organoscandium chemistry, organotitanium chemistry, organovanadium chemistry, organochromium chemistry, organomanganese chemistry, organoiron chemistry, organocobalt chemistry, organonickel chemistry, organocopper chemistry, organozinc chemistry, organogallium chemistry, organogermanium chemistry, organoarsenic chemistry, organoselenium chemistry
Period 5 elements: organoyttrium chemistry, organozirconium chemistry, organoniobium chemistry, organomolybdenum chemistry, organotechnetium chemistry, organoruthenium chemistry, organorhodium chemistry, organopalladium chemistry, organosilver chemistry, organocadmium chemistry, organoindium chemistry, organotin chemistry, organoantimony chemistry, organotellurium chemistry
Period 6 elements: organolanthanide chemistry, organocerium chemistry, organotantalum chemistry, organotungsten chemistry, organorhenium chemistry, organoosmium chemistry, organoiridium chemistry, organoplatinum chemistry, organogold chemistry, organomercury chemistry, organothallium chemistry, organolead chemistry, organobismuth chemistry, organopolonium chemistry
Period 7 elements: organoactinide chemistry, organothorium chemistry, organouranium chemistry, organoneptunium chemistry

Industrial applications

Organometallic compounds find wide use in commercial reactions, both as homogeneous catalysts and as stoichiometric reagents. For instance, organolithium, organomagnesium, and organoaluminium compounds, examples of which are highly basic and highly reducing, are useful stoichiometrically but also catalyze many polymerization reactions.

Almost all processes involving carbon monoxide rely on catalysts, notable examples being described as carbonylations. The production of acetic acid from methanol and carbon monoxide is catalyzed via metal carbonyl complexes in the Monsanto process and Cativa process. Most synthetic aldehydes are produced via hydroformylation. The bulk of the synthetic alcohols, at least those larger than ethanol, are produced by hydrogenation of hydroformylation-derived aldehydes. Similarly, the Wacker process is used in the oxidation of ethylene to acetaldehyde.

Almost all industrial processes involving alkene-derived polymers rely on organometallic catalysts. The world's polyethylene and polypropylene are produced both heterogeneously, via Ziegler–Natta catalysis, and homogeneously, e.g., via constrained geometry catalysts.

Most processes involving hydrogen rely on metal-based catalysts. Whereas bulk hydrogenations (e.g., margarine production) rely on heterogeneous catalysts, for the production of fine chemicals such hydrogenations rely on soluble (homogeneous) organometallic complexes or involve organometallic intermediates. Organometallic complexes allow these hydrogenations to be effected asymmetrically.

Many semiconductors are produced from trimethylgallium, trimethylindium, trimethylaluminium, and trimethylantimony. These volatile compounds are decomposed along with ammonia, arsine, phosphine and related hydrides on a heated substrate via the metalorganic vapor phase epitaxy (MOVPE) process in the production of light-emitting diodes (LEDs).
Organometallic reactions

Organometallic compounds undergo several important reactions:

associative and dissociative substitution
oxidative addition and reductive elimination
transmetalation
migratory insertion
β-hydride elimination
electron transfer
carbon-hydrogen bond activation
carbometalation
hydrometalation
cyclometalation
nucleophilic abstraction

The synthesis of many organic molecules is facilitated by organometallic complexes. Sigma-bond metathesis is a synthetic method for forming new carbon-carbon sigma bonds. Sigma-bond metathesis is typically used with early transition-metal complexes that are in their highest oxidation state. Using transition metals that are in their highest oxidation state prevents other reactions from occurring, such as oxidative addition. In addition to sigma-bond metathesis, olefin metathesis is used to synthesize various carbon-carbon pi bonds. Neither sigma-bond metathesis nor olefin metathesis changes the oxidation state of the metal. Many other methods are used to form new carbon-carbon bonds, including beta-hydride elimination and insertion reactions.

Catalysis

Organometallic complexes are commonly used in catalysis. Major industrial processes include hydrogenation, hydrosilylation, hydrocyanation, olefin metathesis, alkene polymerization, alkene oligomerization, hydrocarboxylation, methanol carbonylation, and hydroformylation. Organometallic intermediates are also invoked in many heterogeneous catalysis processes, analogous to those listed above. Additionally, organometallic intermediates are assumed for the Fischer–Tropsch process. Organometallic complexes are commonly used in small-scale fine chemical synthesis as well, especially in cross-coupling reactions that form carbon-carbon bonds, e.g. Suzuki-Miyaura coupling, Buchwald-Hartwig amination for producing aryl amines from aryl halides, and Sonogashira coupling.

Environmental concerns

Natural and contaminant organometallic compounds are found in the environment. Some that are remnants of human use, such as organolead and organomercury compounds, are toxicity hazards. Tetraethyllead was prepared for use as a gasoline additive but has fallen into disuse because of lead's toxicity. Its replacements are other organometallic compounds, such as ferrocene and methylcyclopentadienyl manganese tricarbonyl (MMT). The organoarsenic compound roxarsone is a controversial animal feed additive. In 2006, approximately one million kilograms of it were produced in the U.S. alone. Organotin compounds were once widely used in anti-fouling paints but have since been banned due to environmental concerns.
https://en.wikipedia.org/wiki/Common%20ostrich
Common ostrich
The common ostrich (Struthio camelus), or simply ostrich, is a species of flightless bird native to certain areas of Africa. It is one of two extant species of ostriches, the only living members of the genus Struthio in the ratite order of birds. The other is the Somali ostrich (Struthio molybdophanes), which has been recognized as a distinct species by BirdLife International since 2014, having been previously considered a distinctive subspecies of ostrich. The common ostrich belongs to the order Struthioniformes. Struthioniformes previously contained all the ratites, such as the kiwis, emus, rheas, and cassowaries. However, recent genetic analysis has found that the group is not monophyletic, as it is paraphyletic with respect to the tinamous, so the ostriches are now classified as the only members of the order. Phylogenetic studies have shown that it is the sister group to all other members of Palaeognathae, and thus the flighted tinamous are the sister group to the extinct moa. It is distinctive in its appearance, with a long neck and legs, and can run for a long time at a speed of with short bursts up to about , the fastest land speed of any bipedal animal and the second fastest of all land animals after the Cheetah. The common ostrich is the largest living species of bird and thus the largest living dinosaur. It lays the largest eggs of any living bird (the extinct giant elephant bird (Aepyornis maximus) of Madagascar and the south island giant moa (Dinornis robustus) of New Zealand laid larger eggs). Ostriches are the most dangerous birds on the planet for humans, with an average of two to three deaths being recorded each year in South Africa. The common ostrich's diet consists mainly of plant matter, though it also eats invertebrates and small reptiles. It lives in nomadic groups of 5 to 50 birds. When threatened, the ostrich will either hide itself by lying flat against the ground or run away. If cornered, it can attack with a kick of its powerful legs. Mating patterns differ by geographical region, but territorial males fight for a harem of two to seven females. The common ostrich is farmed around the world, particularly for its feathers, which are decorative and are also used as feather dusters. Its skin is used for leather products and its meat is marketed commercially, with its leanness a common marketing point. Description The common ostrich is the largest and heaviest living bird. Males stand tall and weigh , whereas females are about tall and weigh . While exceptional male ostriches (in the nominate subspecies) can weigh up to , some specimens in South Africa can only weigh between . New chicks are fawn in color, with dark brown spots. After three months they start to gain their juvenile plumage, which is steadily replaced by adult-like plumage during their second year. At four or five months old, they are already about half the size of an adult bird, and after a year they reach adult height, but not till they are 18 months old will they be fully as heavy as their parents. The feathers of adult males are mostly black, with white primaries and a white tail. However, the tail of one subspecies is buff. Females and young males are grayish-brown and white. The head and neck of both male and female ostriches are nearly bare, with a thin layer of down. The skin of the female's neck and thighs is pinkish gray, while the male's is gray or pink dependent on subspecies. 
The long neck and legs keep their head up to above the ground, and their eyes are said to be the largest of any land vertebrate in diameter helping them to see predators at a great distance. The eyes are shaded from sunlight from above. However, the head and bill are relatively small for the birds' huge size, with the bill measuring . Their skin varies in color depending on the subspecies, with some having light or dark gray skin and others having pinkish or even reddish skin. The strong legs of the common ostrich are unfeathered and show bare skin, with the tarsus (the lowest upright part of the leg) being covered in scales: red in the male, black in the female. The tarsus of the common ostrich is the largest of any living bird, measuring in length. The bird is didactyl, having just two toes on each foot (most birds have four), with the nail on the larger, inner toe resembling a hoof. The outer toe has no nail. The reduced number of toes is an adaptation that appears to aid in running, useful for getting away from predators. Common ostriches can run at a speed over and can cover in a single stride. The wings reach a span of about , and the wing chord measurement of is around the same size as for the largest flying birds. The feathers lack the tiny hooks that lock together the smooth external feathers of flying birds, and so are soft and fluffy and serve as insulation. Common ostriches can tolerate a wide range of temperatures. In much of their habitat, temperatures vary as much as between night and day. Their temperature control relies in part on behavioral thermoregulation. For example, they use their wings to cover the naked skin of the upper legs and flanks to conserve heat, or leave these areas bare to release heat. The wings also function as stabilizers to give better maneuverability when running. Tests have shown that the wings are actively involved in rapid braking, turning, and zigzag maneuvers. They have 50–60 tail feathers, and their wings have 16 primary, four alular, and 20–23 secondary feathers. The common ostrich's sternum is flat, lacking the keel to which wing muscles attach in flying birds. The beak is flat and broad, with a rounded tip. Like all ratites, the ostrich has no crop, and it also lacks a gallbladder and the caecum is . Unlike all other living birds, the common ostrich secretes urine separately from feces. All other birds store the urine and feces combined in the coprodeum, but the ostrich stores the feces in the terminal rectum. They also have unique pubic bones that are fused to hold their gut. Unlike most birds, the males have a copulatory organ, which is retractable and long. Their palate differs from other ratites in that the sphenoid and palatal bones are unconnected. Taxonomy The common ostrich was originally described by Carl Linnaeus from Sweden in his 18th-century work, Systema Naturae under its current binomial name. Its genus is derived from the Late Latin struthio meaning "ostrich". The specific name is an allusion to "strouthokamelos" the Ancient Greek name for the ostrich, meaning camel-sparrow, the "camel" term referring to its dry habitat. Στρουθοκάμηλος is still the modern Greek name for the ostrich. The common ostrich belongs to the Infraclass Palaeognathae commonly known as ratites. Other members include rheas, emus, cassowaries, moa, kiwi, elephant birds, tinamous. 
Subspecies Four subspecies are recognized: Some analyses indicate that the Somali ostrich is now considered a full species; the Tree of Life Project, The Clements Checklist of Birds of the World, BirdLife International, and the IOC World Bird List recognize it as a different species. A few authorities, including the Howard and Moore Complete Checklist of the Birds of the World, do not recognize it as separate. Mitochondrial DNA haplotype comparisons suggest that it diverged from the other ostriches not quite 4 mya due to formation of the East African Rift. Hybridization with the subspecies that evolved southwestwards of its range, S. c. massaicus, has apparently been prevented from occurring on a significant scale by ecological separation; the Somali ostrich prefers bushland where it browses middle-height vegetation for food while the Masai ostrich is, like the other subspecies, a grazing bird of the open savanna and miombo habitat. The population from Río de Oro was once separated as Struthio camelus spatzi because its eggshell pores were shaped like a teardrop and not round. As there is considerable variation of this character and there were no other differences between these birds and adjacent populations of S. c. camelus, the separation is no longer considered valid. However, a study analysing the postcranial skeleton of all living and recently extinct species and subspecies of ostriches appeared to validate S. c. spatzi based on its unique skeletal proportions. This population disappeared in the latter half of the 20th century. There were 19th-century reports of the existence of small ostriches in North Africa; these are referred to as Levaillant's ostrich (Struthio bidactylus) but remain a hypothetical form not supported by material evidence. Distribution and habitat Common ostriches formerly occupied Africa north and south of the Sahara, East Africa, Africa south of the rainforest belt, and much of Asia Minor. Today common ostriches prefer open land and are native to the savannas and Sahel of Africa, both north and south of the equatorial forest zone. In southwest Africa they inhabit the semi-desert or true desert. Farmed common ostriches in Australia have established feral populations. The Arabian ostriches in the Near and Middle East were hunted to extinction by the middle of the 20th century. Attempts to reintroduce the common ostrich into Israel have failed. Common ostriches have occasionally been seen inhabiting islands on the Dahlak Archipelago, in the Red Sea near Eritrea. Research conducted by the Birbal Sahni Institute of Palaeobotany in India found molecular evidence that ostriches lived in India 25,000 years ago. DNA tests on fossilized eggshells recovered from eight archaeological sites in the states of Rajasthan, Gujarat and Madhya Pradesh found 92% genetic similarity between the eggshells and the North African ostrich, so these could have been fairly distant relatives. Ostriches are farmed in Australia. Many escaped, however, and feral ostriches now roam the Australian outback. Behaviour and ecology Common ostriches normally spend the winter months in pairs or alone. Only 16 percent of common ostrich sightings were of more than two birds. During breeding season and sometimes during extreme rainless periods ostriches live in nomadic groups of five to 100 birds (led by a top hen) that often travel together with other grazing animals, such as zebras or antelopes. Ostriches are diurnal, but may be active on moonlit nights. They are most active early and late in the day. 
The male common ostrich territory is between . With their acute eyesight and hearing, common ostriches can sense predators such as lions from far away. When being pursued by a predator, they have been known to reach speeds in excess of , or possibly and can maintain a steady speed of , which makes the common ostrich the world's fastest two-legged animal. When lying down and hiding from predators, the birds lay their heads and necks flat on the ground, making them appear like a mound of earth from a distance, aided by the heat haze in their hot, dry habitat. When threatened, common ostriches run away, but they can cause serious injury and death with kicks from their powerful legs. Their legs can only kick forward. The kick from an ostrich can yield . Feeding They mainly feed on seeds, shrubs, grass, fruit, and flowers; occasionally they also eat insects such as locusts, small reptiles such as lizards, and animal remains left by carnivorous predators. Lacking teeth, they swallow pebbles that act as gastroliths to grind food in the gizzard. When eating, they will fill their gullet with food, which is in turn passed down their esophagus in the form of a ball called a bolus. The bolus may be as much as . After passing through the neck (there is no crop) the food enters the gizzard and is worked on by the aforementioned pebbles. The gizzard can hold as much as , of which up to 45% may be sand and pebbles. Common ostriches can go without drinking for several days, using metabolic water and moisture in ingested plants, but they enjoy liquid water and frequently take baths where it is available. They can survive losing up to 25% of their body weight through dehydration. Mating Common ostriches become sexually mature when they are 2 to 4 years old; females mature about six months earlier than males. As with other birds, an individual may reproduce several times over its lifetime. The mating season begins in March or April and ends sometime before September. The mating process differs in different geographical regions. Territorial males typically boom (by inflating their neck) in defense of their territory and harem of two to seven hens; the successful male may then mate with several females in the area, but will only form a pair bond with a 'major' female. The cock performs with his wings, alternating wing beats, until he attracts a mate. They will go to the mating area and he will maintain privacy by driving away all intruders. They graze until their behavior is synchronized, then the feeding becomes secondary and the process takes on a ritualistic appearance. The cock will then excitedly flap alternate wings again and start poking on the ground with his bill. He will then violently flap his wings to symbolically clear out a nest in the soil. Then, while the hen runs a circle around him with lowered wings, he will wind his head in a spiral motion. She will drop to the ground and he will mount for copulation. Common ostriches raised entirely by humans may direct their courtship behavior not at other ostriches, but toward their human keepers. The female common ostrich lays her fertilized eggs in a single communal nest, a simple pit, deep and wide, scraped in the ground by the male. The dominant female lays her eggs first; when it is time to cover them for incubation, she discards extra eggs from the weaker females, leaving about 20 in most cases. A female common ostrich can distinguish her own eggs from the others in a communal nest. 
Ostrich eggs are the largest of all eggs, though they are actually the smallest eggs relative to the size of the adult bird – on average they are long, wide, and weigh , over 20 times the weight of a chicken's egg and only 1 to 4% the size of the female. They are glossy cream-colored, with thick shells marked by small pits. The eggs are incubated by the females by day and by the males by night. This uses the coloration of the two sexes to escape detection of the nest. The drab female blends in with the sand, while the black male is nearly undetectable in the night. The incubation period is 35 to 45 days, which is rather short compared to other ratites. This is believed to be the case due to the high rate of predation. Typically, the male defends the hatchlings and teaches them to feed, although males and females cooperate in rearing chicks. Fewer than 10% of nests survive the 9-week period of laying and incubation, and of the surviving chicks, only 15% of those survive to 1 year of age. However, among those common ostriches who survive to adulthood, the species is one of the longest-living bird species. Common ostriches in captivity have lived to 62 years and 7 months. Predators As a flightless species in the rich biozone of the African savanna, the common ostrich faces a variety of formidable predators throughout its life cycle. Animals that prey on ostriches of all ages may include cheetahs, lions, leopards, African hunting dogs, and spotted hyenas. Predators of nests and young common ostriches include jackals, various birds of prey, warthogs, mongoose, and Egyptian vultures. Egyptian vultures have been known to hurl stones at ostrich eggs to crack them open so they can eat their contents. Due to predation pressure, common ostriches have many antipredator tactics. Though they can deliver formidable kicks, they use their great eyesight and speed to run from most of their predators. Since ostriches that have detected predators are almost impossible to catch, most predators will try to ambush an unsuspecting bird using obstructing vegetation or other objects. Some ostriches forage with other ostriches or mammals such as wildebeests and zebras to detect predators more efficiently. If the nest or young are threatened, either or both of the parents may create a distraction, feigning injury. However, they may sometimes fiercely fight predators, especially when chicks are being defended, and are capable of killing humans, hyenas, and even lions in such confrontations. In non-native areas, especially on Ostrich farms in North America, adult ostriches have no known enemies due to their large size, intimidating presence and behaviour similar to that of overgrown guard dogs; with instances of them attacking and decapitating coyotes on one occasion. Usually, ostrich hunting is done by male cheetah coalitions in the Kalahari region during the night, when ostrich's vigilance is less effective. Cheetahs in other regions rarely hunt ostriches, but an exceptional coalition composed of three East African cheetahs has been reported in Kenya. Similarly, lions hunt ostriches mainly in the Kalahari region and not in other regions, or take ostriches as only a small percentage of their prey. Overall, due to their speed, vigilance, and possibly dangerous kick, ostriches are usually avoided by most predators, including lions, leopards, wild dogs, and cheetahs. Despite parental care, 90% is typical for chick mortality, most of it caused by predation. 
Physiology

Respiration

Anatomy

The morphology of the common ostrich lung indicates that the structure conforms to that of other avian species, but it still retains features of the primitive, ratite, avian structure. The opening to the respiratory pathway begins with the laryngeal cavity lying posterior to the choanae within the buccal cavity. The tip of the tongue then lies anterior to the choanae, excluding the nasal respiratory pathway from the buccal cavity. The trachea lies ventrally to the cervical vertebrae, extending from the larynx to the syrinx, where the trachea enters the thorax and divides into two primary bronchi, one to each lung, which continue directly through to become mesobronchi. Ten different air sacs attach to the lungs to form areas for respiration. The most posterior air sacs (abdominal and post-thoracic) differ in that the right abdominal air sac is relatively small, lying to the right of the mesentery and dorsally to the liver, while the left abdominal air sac is large and lies to the left of the mesentery. The connection from the main mesobronchi to the more anterior air sacs, including the interclavicular, lateral clavicular, and pre-thoracic sacs, is known as the ventrobronchi region, while the caudal end of the mesobronchus branches into several dorsobronchi. Together, the ventrobronchi and dorsobronchi are connected by intra-pulmonary airways, the parabronchi, which form an arcade structure within the lung called the paleopulmo. It is the only such structure found in primitive birds such as ratites. The largest air sacs found within the respiratory system are those of the post-thoracic region, while the others decrease in size in the order of the interclavicular (unpaired), abdominal, pre-thoracic, and lateral clavicular sacs. The adult common ostrich lung lacks the connective tissue known as interparabronchial septa, which lends strength to the non-compliant avian lung in other bird species. Because of this lack of connective tissue surrounding the parabronchi, adjacent parabronchial lumina exchange blood capillaries or avascular epithelial plates. Like mammals, ostrich lungs contain an abundance of type II cells at gas exchange sites, an adaptation for preventing lung collapse during slight volume changes.

Function

The common ostrich is an endotherm and maintains a constant body temperature across the extreme temperature conditions in which it lives, such as the heat of the savanna and desert regions of Africa. The ostrich ventilates its respiratory system via a costal pump rather than a diaphragmatic pump as seen in most mammals. Thus, it is able to use a series of air sacs connected to the lungs. The use of air sacs forms the basis for the three main avian respiratory characteristics:

Air is able to flow continuously in one direction through the lung, making it more efficient than the mammalian lung.
It provides birds with a large residual volume, allowing them to breathe much more slowly and deeply than a mammal of the same body mass.
It provides a large source of air that is used not only for gaseous exchange, but also for the transfer of heat by evaporation.

Inhalation begins at the mouth and the nostrils located at the front of the beak. The air then flows through the anatomical dead space of a highly vascular trachea and an expansive bronchial system, where it is further conducted to the posterior air sacs. Air flow through the parabronchi of the paleopulmo is in the same direction as through the dorsobronchi during inspiration and expiration.
Inspired air moves into the respiratory system as a result of the expansion of thoraco abdominal cavity; controlled by inspiratory muscles. During expiration, oxygen poor air flows to the anterior air sacs and is expelled by the action of the expiratory muscles. The common ostrich air sacs play a key role in respiration, since they are capacious, and increase surface area (as described by the Fick Principle). The oxygen rich air flows unidirectionally across the respiratory surface of the lungs; providing the blood that has a crosscurrent flow with a high concentration of oxygen. To compensate for the large "dead" space, the common ostrich trachea lacks valves to allow faster inspiratory air flow. In addition, the total lung capacity of the respiratory system, (including the lungs and ten air sacs) of a ostrich is about , with a tidal volume ranging from . The tidal volume is seen to double resulting in a 16-fold increase in ventilation. Overall, ostrich respiration can be thought of as a high velocity-low pressure system. At rest, there is a small pressure difference between the ostrich air sacs and the atmosphere, suggesting simultaneous filling and emptying of the air sacs. The increase in respiration rate from the low range to the high range is sudden and occurs in response to hyperthermia. Birds lack sweat glands, so when placed under stress due to heat, they heavily rely upon increased evaporation from the respiratory system for heat transfer. This rise in respiration rate however is not necessarily associated with a greater rate of oxygen consumption. Therefore, unlike most other birds, the common ostrich is able to dissipate heat through panting without experiencing respiratory alkalosis by modifying ventilation of the respiratory medium. During hyperpnea ostriches pant at a respiratory rate of 40–60 cycles per minute, versus their resting rate of 6–12 cycles per minute. Hot, dry, and moisture lacking properties of the common ostrich respiratory medium affect oxygen's diffusion rate (Henry's Law). Common ostriches develop via Intussusceptive angiogenesis, a mechanism of blood vessel formation, characterizing many organs. It is not only involved in vasculature expansion, but also in angioadaptation of vessels to meet physiological requirements. The use of such mechanisms demonstrates an increase in the later stages of lung development, along with elaborate parabronchial vasculature, and reorientation of the gas exchange blood capillaries to establish the crosscurrent system at the blood-gas barrier. The blood–gas barrier (BGB) of their lung tissue is thick. The advantage of this thick barrier may be protection from damage by large volumes of blood flow in times of activity, such as running, since air is pumped by the air sacs rather than the lung itself. As a result, the capillaries in the parabronchi have thinner walls, permitting more efficient gaseous exchange. In combination with separate pulmonary and systemic circulatory systems, it helps to reduce stress on the BGB. Circulation Heart anatomy The common ostrich heart is a closed system, contractile chamber. It is composed of myogenic muscular tissue associated with heart contraction features. There is a double circulatory plan in place possessing both a pulmonary circuit and systemic circuit. The common ostrich's heart has similar features to other avian species, like having a conically shaped heart and being enclosed by a pericardium layer. 
Moreover, similarities also include a larger right atrium volume and a thicker left ventricle to fulfil the systemic circuit. The ostrich heart has three features that are absent in related birds:

The right atrioventricular valve is fixed to the interventricular septum by a thick muscular stalk, which prevents back-flow of blood into the atrium when ventricular systole is occurring. In the fowl this valve is only connected by a short septal attachment.
The pulmonary veins attach to the left atrium separately, and the openings to the pulmonary veins are separated by a septum.
Moderator bands, full of Purkinje fibers, are found in different locations in the left and right ventricles. These bands are associated with contractions of the heart, and this difference is suggested to cause the left ventricle to contract harder to create more pressure for a complete circulation of blood around the body.

The position of the atrioventricular node differs from that of other fowl. It is located in the endocardium of the atrial surface of the right atrioventricular valve. It is not covered by connective tissue, which is characteristic of vertebrate heart anatomy. It also contains fewer myofibrils than usual myocardial cells. The AV node connects the atrial and ventricular chambers. It functions to carry the electrical impulse from the atria to the ventricles. Upon examination, the myocardial cells are observed to have large, densely packed chromosomes within the nucleus.

The coronary arteries start in the right and left aortic sinus and provide blood to the heart muscle in a similar fashion to most other vertebrates. Other domestic birds capable of flight have three or more coronary arteries that supply blood to the heart muscle. The blood supply from the coronary arteries begins as a large branch over the surface of the heart. It then moves along the coronary groove and continues into the tissue as interventricular branches toward the apex of the heart. The atria, ventricles, and septum are supplied with blood in this way. The deep branches of the coronary arteries found within the heart tissue are small and supply the interventricular septum and the right atrioventricular valve with the blood nutrients needed to carry out their processes. The interatrial artery of the ostrich is small in size and exclusively supplies blood to only part of the left auricle and the interatrial septum.

The Purkinje fibers (p-fibers) found in the heart's moderator bands are specialized cardiac muscle fibers that cause the heart to contract. The Purkinje cells are mostly found within both the endocardium and the sub-endocardium. The sinoatrial node shows a small concentration of Purkinje fibers; however, continuing through the conducting pathway of the heart, the bundle of His shows the highest concentration of these Purkinje fibers.

Blood composition

The red blood cell count per unit volume in the ostrich is about 40% of that of a human; however, the red blood cells of the ostrich are about three times larger than the red blood cells of a human. The P50, the oxygen partial pressure at which hemoglobin is half saturated, is higher than that of both humans and similar avian species, meaning the blood oxygen affinity is lower. This decreased oxygen affinity is due to the hemoglobin configuration found in common ostrich blood. The common ostrich's tetramer is composed of hemoglobin types A and D, compared to typical mammalian tetramers composed of hemoglobin types A and B; the hemoglobin D configuration causes a decreased oxygen affinity at the site of the respiratory surface.
During the embryonic stage, hemoglobin E is present. This subtype increases oxygen affinity in order to transport oxygen across the allantoic membrane of the embryo. This can be attributed to the high metabolic need of the developing embryo, so a high oxygen affinity serves to satisfy this demand. When the chick hatches, hemoglobin E diminishes while hemoglobin A and D increase in concentration. This shift in hemoglobin concentration results in both decreased oxygen affinity and an increased P50 value.

Furthermore, the P50 value is influenced by differing organic modulators. In the typical mammalian RBC, 2,3-DPG causes a lower affinity for oxygen. 2,3-DPG constitutes approximately 42–47% of the cell's phosphate in the embryonic ostrich. However, the adult ostrich has no traceable 2,3-DPG. In place of 2,3-DPG, the ostrich uses inositol polyphosphates (IPP), which vary from 1 to 6 phosphates per molecule. In relation to the IPP, the ostrich also uses ATP to lower oxygen affinity. ATP makes up a fairly consistent fraction of the cell's phosphate, around 31% during incubation, dropping to 16–20% in 36-day-old chicks. However, IPP accounts for a low share of total cell phosphate, around 4%, in the embryonic stages, but the IPP concentration later jumps to 60% of the total phosphate of the cell. Thus the majority of the phosphate switches from 2,3-DPG to IPP, suggesting that the overall low oxygen affinity is due to these varying polyphosphates.

Concerning immunological adaptation, it was discovered that wild common ostriches have a pronounced non-specific immune defense, with blood content reflecting high values of lysozyme and phagocytic cells. This is in contrast to domesticated ostriches, which in captivity develop high concentrations of immunoglobulin antibodies in their circulation, indicating an acquired immunological response. It is suggested that this immunological adaptability may allow this species to have a high rate of survival in variable environmental settings.

Osmoregulation

Physiological challenges

The common ostrich is a xeric animal, living in habitats that are both dry and hot. Water is scarce in dry and hot environments, and this poses a challenge to the ostrich's water consumption. Also, the ostrich is a ground bird and cannot fly to find water sources, which poses a further challenge. Because of their size, common ostriches cannot easily escape the heat of their environment; however, they dehydrate less than their smaller bird counterparts because of their small surface area to volume ratio. Hot, arid habitats pose osmotic stress, such as dehydration, which triggers the common ostrich's homeostatic response to osmoregulate.

System overview

The common ostrich is well adapted to hot, arid environments through specialization of its excretory organs. The common ostrich has an extremely long and well-developed colon between the coprodeum and the paired caeca. Well-developed caeca are also found and, in combination with the rectum, form the microbial fermentation chambers used for carbohydrate breakdown. The catabolism of carbohydrates produces water that can be used internally. The majority of their urine is stored in the coprodeum, and the feces are separately stored in the terminal colon. The coprodeum is located ventral to the terminal rectum and urodeum (where the ureters open). Found between the terminal rectum and coprodeum is a strong sphincter.
The coprodeum and cloaca are the main osmoregulatory mechanisms used for the regulation and reabsorption of ions and water, or net water conservation. As expected in a species inhabiting arid regions, dehydration causes a reduction in fecal water, or dry feces. This reduction is believed to be caused by high levels of plasma aldosterone, which leads to rectal absorption of sodium and water. Also expected is the production of hyperosmotic urine; cloacal urine has been found to be 800 mOsm. The U:P (urine:plasma) ratio of the common ostrich is therefore greater than one. Diffusion of water to the coprodeum (where urine is stored) from plasma across the epithelium is voided. This void is believed to be caused by the thick mucosal layering of the coprodeum. Common ostriches have two kidneys, which are chocolate brown in color, are granular in texture, and lie in a depression in the pelvic cavity of the dorsal wall. They are covered by peritoneum and a layer of fat. Each kidney is about long, wide, and divided into a cranial, middle, and caudal sections by large veins. The caudal section is the largest, extending into the middle of the pelvis. The ureters leave the ventral caudomedial surface and continue caudally, near the midline into the opening of the urodeum of the cloaca. Although there is no bladder, a dilated pouch of ureter stores the urine until it is secreted continuously down from the ureters to the urodeum until discharged. Kidney function Common ostrich kidneys are fairly large and so are able to hold significant amounts of solutes. Hence, common ostriches drink relatively large volumes of water daily and excrete generous quantities of highly concentrated urine. It is when drinking water is unavailable or withdrawn that the urine becomes highly concentrated with uric acid and urates. It seems that common ostriches who normally drink relatively large amounts of water tend to rely on renal conservation of water within the kidney system when drinking water is scarce. Though there have been no official detailed renal studies conducted on the flow rate (Poiseuille's Law) and composition of the ureteral urine in the ostrich, knowledge of renal function has been based on samples of cloacal urine, and samples or quantitative collections of voided urine. Studies have shown that the amount of water intake and dehydration impacts the plasma osmolality and urine osmolality within various sized ostriches. During a normal hydration state of the kidneys, young ostriches tend to have a measured plasma osmolality of 284 mOsm and urine osmolality of 62 mOsm. Adults have higher rates with a plasma osmolality of 330 mOsm and urine osmolality of 163 mOsm. The osmolality of both plasma and urine can alter in regards to whether there is an excess or depleted amount of water present within the kidneys. An interesting fact of common ostriches is that when water is freely available, the urine osmolality can reduce to 60–70 mOsm, not losing any necessary solutes from the kidneys when excess water is excreted. Dehydrated or salt-loaded ostriches can reach a maximal urine osmolality of approximately 800 mOsm. When the plasma osmolality has been measured simultaneously with the maximal osmotic urine, it is seen that the urine:plasma ratio is 2.6:1, the highest encountered among avian species. Along with dehydration, there is also a reduction in flow rate from 20 L·d−1 to only 0.3–0.5 L·d−1. 
In both mammals and common ostriches, a high-protein diet increases the glomerular filtration rate (GFR) and urine flow rate (UFR). In various studies, scientists have measured the clearance of creatinine, a fairly reliable marker of GFR. During normal hydration, the glomerular filtration rate is approximately 92 ml/min. However, when an ostrich is dehydrated for at least 48 hours (2 days), this value diminishes to only 25% of the hydrated GFR. Thus, in response to dehydration, ostrich kidneys excrete only small amounts of very viscous urine, with most of the glomerular filtrate reabsorbed and returned to the circulation. Because the reduction of GFR during dehydration is so large, the fractional excretion of water (urine flow rate as a percentage of GFR) drops from 15% at normal hydration to 1% during dehydration. Water intake and turnover Common ostriches employ adaptive features to manage the dry heat and solar radiation in their habitat. Ostriches will drink available water; however, being flightless limits their access to water. They are also able to harvest water through their diet, consuming plants such as Euphorbia heterochroma that hold up to 87% water. Water accounts for 68% of body mass in adult common ostriches, down from 84% in 35-day-old chicks. The differing degrees of water retention are thought to be a result of varying body fat mass. In comparison to smaller birds, ostriches have a lower evaporative water loss owing to their smaller body surface area per unit weight. When heat stress is at its maximum, common ostriches are able to offset evaporative loss by using metabolic water production to counter losses through urine, feces, and respiratory evaporation. An experiment to determine the primary source of water intake in the ostrich indicated that, while the ostrich does employ metabolic water production as a source of hydration, the most important source of water is food. When ostriches were deprived of both food and water, metabolic water production was only 0.5 L·d−1, while total water lost to urine, feces, and evaporation was 2.3 L·d−1. When the birds were given both water and food, total water gain was 8.5 L·d−1; in the food-only condition, total water gain was 10.1 L·d−1. These results show that the metabolic water mechanism cannot offset water loss on its own and that food intake, specifically of plants with a high water content such as Euphorbia heterochroma, is necessary to overcome water loss challenges in the common ostrich's arid habitat. In times of water deprivation, urine electrolyte and osmotic concentrations increase while the urination rate decreases. Under these conditions the urine:plasma solute ratio is approximately 2.5; that is, the urine is hyperosmotic relative to plasma, so solutes are excreted in a relatively small volume of water. Water is thereby held back from excretion, keeping the ostrich hydrated, while the urine that is passed contains higher concentrations of solute. This mechanism exemplifies how renal function facilitates water retention during periods of dehydration stress. Nasal glands A number of avian species use nasal salt glands, alongside their kidneys, to control hypertonicity in their blood plasma.
However, the common ostrich shows no nasal glandular function in regard to this homeostatic process. Even in a state of dehydration, which increases the osmolality of the blood, the nasal salt glands make no sizeable contribution to salt elimination. The overall mass of the glands is also less than that of the duck's nasal gland. The common ostrich, having a heavier body weight, would need larger, heavier nasal glands to excrete salt effectively from its larger blood volume, but this is not the case. These unequal proportions support the conclusion that the common ostrich's nasal glands play no role in salt excretion. Biochemistry The majority of the common ostrich's internal solutes are made up of sodium ions (Na+), potassium ions (K+), chloride ions (Cl−), total short-chain fatty acids (SCFA), and acetate. The caecum contains a high water concentration, with reduced levels nearing the terminal colon, and exhibits a rapid fall in Na+ concentration with only small changes in K+ and Cl−. The colon is divided into three sections and takes part in solute absorption. The upper colon largely absorbs Na+ and SCFA and partially absorbs KCl. The middle colon absorbs Na+ and SCFA, with little net transfer of K+ and Cl−. The lower colon slightly absorbs Na+ and water and secretes K+; there is no net movement of Cl− or SCFA in the lower colon. When the common ostrich is dehydrated, plasma osmolality and plasma ion concentrations increase, although some ions later return to their controlled concentrations. The common ostrich also experiences an increase in haematocrit, resulting in a hypovolemic state. Two antidiuretic hormones, arginine vasotocin (AVT) and angiotensin II (AII), are increased in blood plasma as a response to hyperosmolality and hypovolemia. AVT acts as the avian antidiuretic hormone (ADH), targeting the nephrons of the kidney. ADH causes osmotic reabsorption of water from the lumen of the nephron into the extracellular fluid. This extracellular fluid then drains into blood vessels, causing a rehydrating effect and preventing water loss by both lowering the volume and increasing the concentration of the urine. Angiotensin, on the other hand, causes vasoconstriction of the systemic arterioles and acts as a dipsogen for ostriches. Both of these antidiuretic hormones work together to conserve water that would otherwise be lost due to the osmotic stress of the arid environment. Ostriches are uricotelic, excreting nitrogen in the form of uric acid and related derivatives. Uric acid's low solubility in water gives a semi-solid paste consistency to the ostrich's nitrogenous waste. Thermoregulation Common ostriches are homeothermic endotherms; they maintain a constant body temperature by regulating their metabolic heat production. They closely regulate their core body temperature, although their appendages may be cooler, as is typical of regulating species. The temperatures of the beak, neck surfaces, lower legs, feet, and toes are regulated through heat exchange with the environment. Up to 40% of their metabolic heat production is dissipated across these structures, which account for about 12% of their total surface area. Total evaporative water loss (TEWL) is statistically lower in the common ostrich than in other ratites. As ambient temperature increases, dry heat loss decreases, but evaporative heat loss increases because of increased respiration.
At high ambient temperatures, ostriches become slightly hyperthermic; however, they can maintain a stable body temperature for up to 8 hours in these conditions. When dehydrated, the common ostrich minimizes water loss, allowing body temperature to increase further. Allowing body temperature to rise narrows the temperature gradient between the ostrich and its surroundings. Physical adaptations Common ostriches have developed a comprehensive set of behavioral adaptations for thermoregulation, such as altering the position of their feathers. Common ostriches display a feather-fluffing behavior that aids thermoregulation by regulating convective heat loss at high ambient temperatures. They may also physically seek out shade in times of high ambient temperatures. When feather fluffing, they contract their muscles to raise their feathers and so increase the air space next to their skin; this air space provides an insulating layer. The ostrich will also expose the thermal windows of its unfeathered skin to enhance convective and radiative loss in times of heat stress. At higher ambient temperatures, the temperature of the lower appendages rises, maintaining a difference above ambient; neck surfaces stay a fairly constant amount above ambient across most conditions, with a smaller difference at the highest ambient temperatures. At low ambient temperatures the common ostrich uses feather flattening, which conserves body heat through insulation; the low thermal conductance of air means less heat is lost to the environment. This flattening behavior compensates for the common ostrich's rather poor cutaneous evaporative water loss (CEWL). Feather-heavy areas such as the body, thighs, and wings do not usually vary much from ambient temperature because of these behavioural controls. The ostrich will also cover its legs to reduce heat loss to the environment, along with undergoing piloerection and shivering when faced with low ambient temperatures. Internal adaptations Countercurrent heat exchange in the blood flow to the appendages allows regulated conservation or elimination of heat. When ambient temperatures are low, heterotherms constrict their arterioles to reduce heat loss along skin surfaces; the reverse occurs at high ambient temperatures, when arterioles dilate to increase heat loss. At ambient temperatures below their body temperature (within the thermal neutral zone, TNZ), common ostriches decrease body surface temperatures so that heat loss occurs only across about 10% of the total surface area. This 10% includes critical areas that require blood flow to remain high to prevent freezing, such as the eyes; the eyes and ears tend to be the warmest regions. Temperatures of the lower appendages have been found to remain only slightly above ambient temperature, which minimizes heat exchange across the feet, toes, wings, and legs. Both the gular region and the air sacs, being close to body temperature, are the main contributors to heat and water loss. Surface temperature can be affected by the rate of blood flow to a given area and by the surface area of the surrounding tissue. The ostrich reduces blood flow to the trachea to cool itself and dilates the blood vessels around the gular region to raise the temperature of that tissue. The air sacs are poorly vascularized but show an increased temperature, which aids in heat loss. Common ostriches have evolved a 'selective brain cooling' mechanism as a means of thermoregulation.
This modality allows the common ostrich to manage the temperature of the blood going to the brain in response to the extreme ambient temperature of the surroundings. Heat exchange occurs via the cerebral arteries and the ophthalmic rete, a network of arteries originating from the ophthalmic artery. The ophthalmic rete is analogous to the carotid rete found in mammals, as it also facilitates transfer of heat from arterial blood coming from the core to venous blood returning from the evaporative surfaces of the head. Researchers suggest that common ostriches also employ a 'selective brain warming' mechanism in response to cooler surrounding temperatures in the evenings. The brain was found to maintain a warmer temperature than the carotid arterial blood supply. Researchers hypothesize three mechanisms that could explain this finding: first, a possible increase in metabolic heat production within the brain tissue itself to compensate for the colder arterial blood arriving from the core; second, an overall decrease in cerebral blood flow; and third, warm venous blood perfusing the ophthalmic rete, which helps to warm the cerebral blood that supplies the hypothalamus. Further research is needed to determine how this occurs. Breathing adaptations The common ostrich has no sweat glands, and under heat stress it relies on panting to reduce its body temperature. Panting increases evaporative heat (and water) loss from the respiratory surfaces, forcing air and heat removal without the loss of metabolic salts. Panting gives the common ostrich a very effective respiratory evaporative water loss (REWL). Heat dissipated by respiratory evaporation increases linearly with ambient temperature, matching the rate of heat production. As a result of panting, the common ostrich should eventually experience alkalosis. However, the CO2 concentration in the blood does not change when hot ambient temperatures are experienced; this effect is caused by a lung surface shunt. The lung is not completely shunted, allowing enough oxygen to fulfill the bird's metabolic needs. The common ostrich utilizes gular fluttering, a rapid rhythmic contraction and relaxation of the throat muscles, in a similar way to panting. Both these behaviors allow the ostrich to actively increase the rate of evaporative cooling. In hot conditions water is lost via respiration. Moreover, the varying surface temperatures within the respiratory tract (the gular area, the trachea, and the anterior and posterior air sacs) contribute differently to overall heat and water loss through panting. The long trachea, being cooler than body temperature, is a site of water evaporation. As ambient air becomes hotter, additional evaporation can take place lower in the trachea, making its way to the posterior sacs and shunting the lung surface. The trachea acts as a buffer for evaporation because of its length and its controlled vascularization. The gular region is also heavily vascularized; it serves to cool the blood, but also contributes to evaporation, as previously stated. Air flowing through the trachea can be either laminar or turbulent depending on the state of the bird. When the common ostrich is breathing normally, under no heat stress, air flow is laminar; when it is experiencing heat stress, the air flow is considered turbulent.
This suggests that laminar air flow causes little to no heat transfer, while under heat stress turbulent airflow can cause maximum heat transfer within the trachea. Metabolism Common ostriches are able to attain their necessary energetic requirements via the oxidation of absorbed nutrients. Much of the metabolic rate in animals depends on their allometry, the relationship of body size to shape, anatomy, physiology, and behavior. Hence, the metabolic rate of animals with larger masses is greater than that of animals with smaller masses. When a bird is inactive, unfed, and at an ambient temperature within the thermo-neutral zone, the energy expended is at its minimum. This level of expenditure is better known as the basal metabolic rate (BMR), and can be calculated by measuring the amount of oxygen consumed during various activities. Therefore, common ostriches use more energy than smaller birds in absolute terms, but less per unit mass. A key point when considering common ostrich metabolism is that it is a non-passerine bird. Thus, BMR in ostriches is particularly low, with a value of only 0.113 mL O2 g−1 h−1. This value can further be described using Kleiber's law, which relates BMR to the body mass of an animal: metabolic rate = 70 × M^0.75, where M is body mass in kilograms and metabolic rate is measured in kcal per day. In common ostriches, whole-animal BMR (in mL O2 h−1) scales approximately as 389 × M^0.73 with M in kilograms, describing a line parallel to that of other non-passerine birds but with an intercept only about 60% as high (see the illustrative calculation below). Along with BMR, energy is also needed for a range of other activities. If the ambient temperature is lower than the thermo-neutral zone, heat is produced to maintain body temperature. The metabolic rate in a resting, unfed bird that is producing heat is known as the standard metabolic rate (SMR) or resting metabolic rate (RMR). The common ostrich SMR is approximately 0.26 mL O2 g−1 h−1, almost 2.3 times the BMR. Animals that engage in extensive physical activity employ substantial additional energy for power; this upper limit is known as the maximum metabolic scope, which in the ostrich is at least 28 times greater than the BMR. Likewise, the daily energy turnover rate for an ostrich with access to free water is 12,700 kJ d−1, equivalent to 0.26 mL O2 g−1 h−1. Status and conservation The wild common ostrich population has declined drastically in the last 200 years, with most surviving birds in reserves or on farms. However, its range remains very large, leading the IUCN and BirdLife International to treat it as a species of least concern. Of its five subspecies, the Arabian ostrich (S. c. syriacus) became extinct around 1966. North African ostrich populations are protected under Appendix I of the Convention on International Trade in Endangered Species (CITES), meaning commercial international trade is prohibited and non-commercial trade is strictly regulated. Humans Common ostriches have inspired cultures and civilizations for 5,000 years in Mesopotamia and African centres like Egypt and the Kingdom of Kush. A statue of Arsinoe II of Egypt riding a common ostrich was found in a tomb in Egypt. Hunter-gatherers in the Kalahari use ostrich eggshells as water containers, punching a hole in them; they also produce jewelry from them.
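To make the allometric relations quoted in the Metabolism section above concrete, here is a small illustrative calculation. It is a sketch only: the 100 kg body mass is an assumed example value, the function names are not from the source, and interpreting the ostrich relation as a whole-animal rate in mL O2 per hour is an inference from the quoted per-gram figure of 0.113 mL O2 g−1 h−1.

```python
def kleiber_kcal_per_day(mass_kg: float) -> float:
    """Kleiber's law: basal metabolic rate ~= 70 * M^0.75 (kcal per day, M in kg)."""
    return 70 * mass_kg ** 0.75

def ostrich_bmr_ml_o2_per_hour(mass_kg: float) -> float:
    """Reported ostrich allometry: BMR ~= 389 * M^0.73, interpreted here as a
    whole-animal rate in mL O2 per hour with M in kg (an assumption, see lead-in)."""
    return 389 * mass_kg ** 0.73

mass = 100.0  # assumed adult body mass in kg, for illustration only
print(f"Kleiber estimate: {kleiber_kcal_per_day(mass):.0f} kcal/day")      # ~2214
whole_animal = ostrich_bmr_ml_o2_per_hour(mass)
print(f"Ostrich relation: {whole_animal:.0f} mL O2/h")                     # ~11200
print(f"Per gram: {whole_animal / (mass * 1000):.3f} mL O2 g^-1 h^-1")     # ~0.112
```

The per-gram value recovered this way (about 0.112 mL O2 g−1 h−1) is close to the 0.113 figure quoted above, which is the reason for the unit interpretation adopted in this sketch.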
The presence of such eggshells with engraved hatched symbols dating from the Howiesons Poort period of the Middle Stone Age at Diepkloof Rock Shelter in South Africa suggests common ostriches were an important part of human life as early as 60,000 BP. In Eastern Christianity it is common to hang decorated common ostrich eggs on the chains holding the oil lamps. The initial reason was probably to prevent mice and rats from climbing down the chain to eat the oil. Another, symbolical explanation is based in the fictitious tradition that female common ostriches do not sit on their eggs, but stare at them incessantly until they hatch out, because if they stop staring even for a second the egg will addle. This is equated to the obligation of the Christian to direct his entire attention towards God during prayer, lest the prayer be fruitless. "Head in the sand" myth Contrary to popular belief, ostriches do not bury their heads in sand to avoid danger. This myth likely began with Pliny the Elder (23–79 CE), who wrote that ostriches "imagine, when they have thrust their head and neck into a bush, that the whole of their body is concealed." This may have been a misunderstanding of their sticking their heads in the sand to swallow sand and pebbles to help digest their fibrous food, or, as National Geographic suggests, of the defensive behavior of lying low, so that they may appear from a distance to have their head buried. Another possible origin for the myth lies with the fact that ostriches keep their eggs in holes in the sand instead of nests and must rotate them using their beaks during incubation; digging the hole, placing the eggs, and rotating them might each be mistaken for an attempt to bury their heads in the sand. Economic use In Roman times, there was a demand for common ostriches to use in venatio games or cooking. They have been hunted and farmed for their feathers, which at various times have been popular for ornamentation in fashionable clothing (such as hats during the 19th century). Their skins are valued for their leather. In the 18th century they were almost hunted to extinction; farming for feathers began in the 19th century. At the start of the 20th century there were over 700,000 birds in captivity. The market for feathers collapsed after World War I, but commercial farming for feathers and later for skins and meat became widespread during the 1970s. Common ostriches have been farmed in South Africa since the beginning of the 19th century. According to Frank G. Carpenter, the English are credited with first taming common ostriches outside Cape Town. Farmers captured baby common ostriches and raised them successfully on their property, and they were able to obtain a crop of feathers every seven to eight months instead of killing wild common ostriches for their feathers. Feathers are still commercially harvested. It is claimed that common ostriches produce the strongest commercial leather. Common ostrich meat tastes similar to lean beef and is low in fat and cholesterol, as well as high in calcium, protein, and iron. It is considered to be both poultry and red meat. Uncooked, it is dark red or cherry red, a little darker than beef. Ostrich stew is a dish prepared using common ostrich meat. Some common ostrich farms also cater to agri-tourism, which may produce a substantial portion of the farm's income. This may include tours of the farmlands, souvenirs, or even ostrich rides. 
Attacks Common ostriches typically avoid humans in the wild, since they correctly assess humans as potential predators. If approached, they often run away, but ostriches can be very aggressive when threatened, especially if cornered, and may also attack if they feel the need to defend their territories or offspring. Similar behaviors are noted in captive or domesticated common ostriches, which retain the same natural instincts and can occasionally respond aggressively to stress. When attacking a person, common ostriches deliver slashing kicks with their powerful feet, armed with long claws, with which they can disembowel or kill a person with a single blow. One study of common ostrich attacks estimated that two to three attacks resulting in serious injury or death occur each year in the area of Oudtshoorn, South Africa, where a large number of common ostrich farms are set next to both feral and wild common ostrich populations, making the ostrich, statistically, the world's most dangerous bird. Racing In some countries, people race each other on the backs of common ostriches. The practice is common in Africa and relatively unusual elsewhere. Common ostriches are ridden in the same way as horses, with special saddles, reins, and bits; however, they are harder to manage than horses. The practice is becoming less common due to ethical concerns, and ostrich farms now set a weight limit for riders, making the activity mostly suited to children and smaller adults. Racing is also a part of modern South African culture. In the United States, a tourist attraction in Jacksonville, Florida, called 'The Ostrich Farm' opened in 1892; it and its races became one of the most famous early attractions in the history of Florida. Likewise, the arts scene in Indio, California, includes both ostrich and camel racing. Chandler, Arizona, hosts the annual "Ostrich Festival", which features common ostrich races. Racing has also occurred at many other locations such as Virginia City in Nevada, Canterbury Park in Minnesota, Prairie Meadows in Iowa, Ellis Park in Kentucky, and the Fairgrounds in New Orleans, Louisiana.
Biology and health sciences
Ratites
null
22573
https://en.wikipedia.org/wiki/Otorhinolaryngology
Otorhinolaryngology
Otorhinolaryngology (abbreviated ORL and also known as otolaryngology, otolaryngology–head and neck surgery (ORL–H&N or OHNS), or ear, nose, and throat (ENT)) is a surgical subspecialty within medicine that deals with the surgical and medical management of conditions of the head and neck. Doctors who specialize in this area are called otorhinolaryngologists, otolaryngologists, head and neck surgeons, or ENT surgeons or physicians. Patients seek treatment from an otorhinolaryngologist for diseases of the ear, nose, throat, base of the skull, head, and neck. These commonly include functional diseases that affect the senses and the activities of eating, drinking, speaking, breathing, swallowing, and hearing. In addition, ENT surgery encompasses the surgical management of cancers and benign tumors and reconstruction of the head and neck, as well as plastic surgery of the face, scalp, and neck. Etymology The term is a combination of Neo-Latin combining forms (oto- + rhino- + laryngo- + -logy) derived from four Ancient Greek roots meaning ear, nose, larynx, and study. Training Otorhinolaryngologists are physicians (MD, DO, MBBS, MBChB, etc.) who complete both medical school and an average of five to seven years of post-graduate surgical training in ORL-H&N. In the United States, trainees complete at least five years of surgical residency training, comprising three to six months of general surgical training and four and a half years in ORL-H&N specialist surgery. In Canada and the United States, practitioners complete a five-year residency after medical school. Following residency, some otolaryngologist-head & neck surgeons complete an advanced sub-specialty fellowship of one to two years. Fellowships include head and neck surgical oncology, facial plastic surgery, rhinology and sinus surgery, neuro-otology, pediatric otolaryngology, and laryngology. In the United States and Canada, otorhinolaryngology is one of the most competitive specialties in medicine in which to obtain a residency position following medical school. In the United Kingdom, entrance to higher surgical training is competitive and involves a rigorous national selection process. The training programme consists of 6 years of higher surgical training, after which trainees frequently undertake fellowships in a sub-speciality prior to becoming a consultant. The typical total length of post-secondary education and training is 12–14 years. Otolaryngology is among the more highly compensated surgical specialties in the United States; in 2022, the average annual income was $469,000.
Sub-specialties (*Currently recognized by American Board of Medical Subspecialties) Topics by subspecialty Head and neck surgery Head and neck surgical oncology (field of surgery treating cancer/malignancy of the head and neck) Head and neck mucosal malignancy (cancer of the pink lining of the upper aerodigestive tract) Oral cancer (cancer of lips, gums, tongue, hard palate, cheek, floor of mouth) Oropharyngeal cancer (cancer of oropharynx, soft palate, tonsil, base of tongue) Larynx cancer (voice box cancer) Hypopharynx cancer (lower throat cancer) Sinonasal cancer Nasopharyngeal cancer Skin cancer of the head & neck Thyroid cancer Salivary gland cancer Head and neck sarcoma Endocrine surgery of the head and neck Thyroid surgery Parathyroid surgery Microvascular free flap reconstructive surgery Skull base surgery Otology and neurotology Study of diseases of the outer ear, middle ear and mastoid, and inner ear, and surrounding structures (such as the facial nerve and lateral skull base) Outer ear diseases Otitis externa – outer ear or ear canal inflammation Exostoses or Surfer's ear are bony growths in the outer ear canal Middle ear and mastoid diseases Otitis media – middle ear inflammation Perforated eardrum (hole in the eardrum due to infection, trauma, explosion or loud noise) Mastoiditis Inner ear diseases BPPV – benign paroxysmal positional vertigo Labyrinthitis/Vestibular neuronitis Ménière's disease/Endolymphatic hydrops Perilymphatic fistula Acoustic neuroma, vestibular schwannoma Facial nerve disease Idiopathic facial palsy (Bell's Palsy) Facial nerve tumors Ramsay Hunt Syndrome Symptoms Hearing loss Tinnitus (subjective noise in the ear) Aural fullness (sense of fullness in the ear) Otalgia (pain referring to the ear) Otorrhea (fluid draining from the ear) Vertigo Imbalance Rhinology Rhinology includes nasal dysfunction and sinus diseases. Nasal obstruction Inferior turbinate hypertrophy Nasal septum deviation Chronic sinusitis with nasal polyps Sinusitis – acute, chronic Environmental allergies Rhinitis Pituitary tumor Empty nose syndrome Severe or recurrent epistaxis Pediatric otorhinolaryngology Adenoidectomy Caustic ingestion Cricotracheal resection Decannulation Laryngomalacia Laryngotracheal reconstruction Myringotomy and tubes Obstructive sleep apnea – pediatric Tonsillectomy Laryngology Dysphonia/hoarseness Laryngitis Reinke's edema Vocal cord nodules and polyps Spasmodic dysphonia Tracheostomy Cancer of the larynx Vocology – science and practice of voice habilitation Facial plastic and reconstructive surgery Facial plastic and reconstructive surgery is a one-year fellowship open to otorhinolaryngologists who wish to begin learning the aesthetic and reconstructive surgical principles of the head, face, and neck pioneered by the specialty of Plastic and Reconstructive Surgery. Rhinoplasty and septoplasty Facelift (rhytidectomy) Browlift Blepharoplasty Otoplasty Genioplasty Injectable cosmetic treatments Trauma to the face Nasal bone fracture Mandible fracture Orbital fracture Frontal sinus fracture Complex lacerations and soft tissue damage Skin cancer (e.g. Basal Cell Carcinoma) Sleep surgery Sleep surgery encompasses any surgery that helps alleviate obstructive sleep apnea and can anatomically include any part of the upper airway. 
Nasal cavity / nasopharynx Septoplasty Adenoidectomy (especially in pediatrics) Oral cavity / oropharynx Tonsillectomy (especially in pediatrics) Uvulopalatopharyngoplasty Transoral midline glossectomy Genioglossus advancement Other Hyoid suspension Maxillomandibular advancement Hypoglossal nerve stimulator implant (Inspire) Microvascular reconstruction repair Microvascular reconstruction is a common operation performed on patients who see an otorhinolaryngologist. It is a surgical procedure that involves moving a composite piece of tissue from elsewhere in the patient's body to the head and/or neck. Microvascular head-and-neck reconstruction is used to treat head-and-neck cancers, including those of the larynx and pharynx, oral cavity, salivary glands, jaws, calvarium, sinuses, tongue, and skin. The tissue most commonly moved during this procedure comes from the arms, legs, or back, and can include skin, bone, fat, and/or muscle. The decision on which tissue is moved is determined by the reconstructive needs. Transfer of the tissue to the head and neck allows surgeons to rebuild the patient's jaw, optimize tongue function, and reconstruct the throat. When pieces of tissue are moved, they require their own blood supply to survive in their new location. After the surgery is completed, the blood vessels that feed the tissue transplant are reconnected to blood vessels in the neck. These vessels are typically no more than 1 to 3 millimeters in diameter, so the connections must be made under a microscope, which is why the procedure is called "microvascular surgery".
Biology and health sciences
Fields of medicine
null
22581
https://en.wikipedia.org/wiki/Estrogen
Estrogen
Estrogen (also spelled oestrogen in British English; see spelling differences) is a category of sex hormone responsible for the development and regulation of the female reproductive system and secondary sex characteristics. There are three major endogenous estrogens that have estrogenic hormonal activity: estrone (E1), estradiol (E2), and estriol (E3). Estradiol, an estrane, is the most potent and prevalent. Another estrogen called estetrol (E4) is produced only during pregnancy. Estrogens are synthesized in all vertebrates and some insects. Quantitatively, estrogens circulate at lower levels than androgens in both men and women. While estrogen levels are significantly lower in males than in females, estrogens nevertheless have important physiological roles in males. Like all steroid hormones, estrogens readily diffuse across the cell membrane. Once inside the cell, they bind to and activate estrogen receptors (ERs), which in turn modulate the expression of many genes. Additionally, estrogens bind to and activate rapid-signaling membrane estrogen receptors (mERs), such as GPER (GPR30). In addition to their role as natural hormones, estrogens are used as medications, for instance in menopausal hormone therapy, hormonal birth control, and feminizing hormone therapy for transgender women, intersex people, and nonbinary people. Synthetic and natural estrogens have been found in the environment and are referred to as xenoestrogens. Estrogens are among the wide range of endocrine-disrupting compounds (EDCs) and can cause health issues and reproductive dysfunction in both wildlife and humans. Types and examples The four major naturally occurring estrogens in women are estrone (E1), estradiol (E2), estriol (E3), and estetrol (E4). Estradiol (E2) is the predominant estrogen during the reproductive years, both in terms of absolute serum levels and in terms of estrogenic activity. During menopause, estrone is the predominant circulating estrogen, and during pregnancy estriol is the predominant circulating estrogen in terms of serum levels. Given by subcutaneous injection in mice, estradiol is about 10-fold more potent than estrone and about 100-fold more potent than estriol. Thus, estradiol is the most important estrogen in non-pregnant females who are between the menarche and menopause stages of life. However, during pregnancy this role shifts to estriol, and in postmenopausal women estrone becomes the primary form of estrogen in the body. Another type of estrogen called estetrol (E4) is produced only during pregnancy. All of the different forms of estrogen are synthesized from androgens, specifically testosterone and androstenedione, by the enzyme aromatase. Minor endogenous estrogens, the biosyntheses of which do not involve aromatase, include 27-hydroxycholesterol, dehydroepiandrosterone (DHEA), 7-oxo-DHEA, 7α-hydroxy-DHEA, 16α-hydroxy-DHEA, 7β-hydroxyepiandrosterone, androstenedione (A4), androstenediol (A5), 3α-androstanediol, and 3β-androstanediol. Some estrogen metabolites, such as the catechol estrogens 2-hydroxyestradiol, 2-hydroxyestrone, 4-hydroxyestradiol, and 4-hydroxyestrone, as well as 16α-hydroxyestrone, are also estrogens with varying degrees of activity. The biological importance of these minor estrogens is not entirely clear. Biological function The actions of estrogen are mediated by the estrogen receptor (ER), a dimeric nuclear protein that binds to DNA and controls gene expression.
Like other steroid hormones, estrogen enters the cell passively, where it binds to and activates the estrogen receptor. The estrogen:ER complex binds to specific DNA sequences called hormone response elements to activate the transcription of target genes (in a study using an estrogen-dependent breast cancer cell line as a model, 89 such genes were identified). Since estrogen enters all cells, its actions are dependent on the presence of the ER in the cell. The ER is expressed in specific tissues including the ovary, uterus, and breast. The metabolic effects of estrogen in postmenopausal women have been linked to the genetic polymorphism of the ER. While estrogens are present in both men and women, they are usually present at significantly higher levels in women of reproductive age. They promote the development of female secondary sexual characteristics, such as breasts, darkening and enlargement of the nipples, thickening of the endometrium, and other aspects of regulating the menstrual cycle. In males, estrogen regulates certain functions of the reproductive system important to the maturation of sperm and may be necessary for a healthy libido. Overview of actions Musculoskeletal Anabolic: Increases muscle mass and strength, speed of muscle regeneration, and bone density, increased sensitivity to exercise, protection against muscle damage, stronger collagen synthesis, increases the collagen content of connective tissues, tendons, and ligaments, but also decreases stiffness of tendons and ligaments (especially during menstruation). Decreased stiffness of tendons gives women a much lower predisposition to muscle strains, but soft ligaments are much more prone to injuries (ACL tears are 2-8x more common among women than men). Reduce bone resorption, increase bone formation In mice, estrogen has been shown to increase the proportion of the fastest-twitch (type IIX) muscle fibers by over 40%. Metabolic Anti-inflammatory properties Accelerate metabolism Gynoid fat distribution: increased fat storage or estrogenic fat in some body parts such as breasts, buttocks, and legs, but decreased abdominal and visceral fat (androgenic obesity). Estradiol also regulates energy expenditure and body weight homeostasis, and seems to have much stronger anti-obesity effects than testosterone in general. Other structural Maintenance of vessels and skin Protein synthesis Increase hepatic production of binding proteins Increase production of the hepatokine adropin. Coagulation Increase circulating level of factors 2, 7, 9, 10, plasminogen Decrease antithrombin III Increase platelet adhesiveness Increase vWF (estrogen -> Angiotensin II -> Vasopressin) Increase PAI-1 and PAI-2, also through Angiotensin II Lipid Increase HDL, triglyceride Decrease LDL, fat deposition Fluid balance Salt (sodium) and water retention, including facial swelling and edema Estrogen is associated with edema, including facial and abdominal swelling. Melanin Estrogen is known to cause darkening of skin, especially in the face and areolae. Pale-skinned women will develop browner and yellower skin during pregnancy, as a result of the increase of estrogen, known as the "mask of pregnancy". Estrogen may explain why women have darker eyes than men, and also a lower risk of skin cancer than men; a European study found that women generally have darker skin than men. Lung function Promotes lung function by supporting alveoli (in rodents, but probably also in humans).
Sexual Mediate formation of female secondary sex characteristics Stimulate endometrial growth Increase uterine growth Increase vaginal lubrication Thicken the vaginal wall Uterus lining Estrogen, together with progesterone, promotes and maintains the uterine lining in preparation for implantation of a fertilized egg and maintenance of uterine function during gestation; it also upregulates the oxytocin receptor in the myometrium. Ovulation A surge in estrogen level induces the release of luteinizing hormone, which then triggers ovulation by releasing the egg from the Graafian follicle in the ovary. Sexual behavior Estrogen is required for female mammals to engage in lordosis behavior during estrus (when animals are "in heat"). This behavior is required for sexual receptivity in these mammals and is regulated by the ventromedial nucleus of the hypothalamus. Sex drive is dependent on androgen levels only in the presence of estrogen; without estrogen, free testosterone actually decreases sexual desire rather than increasing it, as demonstrated in women with hypoactive sexual desire disorder, whose sexual desire can be restored by administration of estrogen (for example via an oral contraceptive). Female pubertal development Estrogens are responsible for the development of female secondary sexual characteristics during puberty, including breast development, widening of the hips, and female fat distribution. Conversely, androgens are responsible for pubic and body hair growth, as well as acne and axillary odor. Breast development Estrogen, in conjunction with growth hormone (GH) and its secretory product insulin-like growth factor 1 (IGF-1), is critical in mediating breast development during puberty, as well as breast maturation during pregnancy in preparation for lactation and breastfeeding. Estrogen is primarily and directly responsible for inducing the ductal component of breast development, as well as for causing fat deposition and connective tissue growth. It is also indirectly involved in the lobuloalveolar component, by increasing progesterone receptor expression in the breasts and by inducing the secretion of prolactin. Enabled by estrogen, progesterone and prolactin work together to complete lobuloalveolar development during pregnancy. Androgens such as testosterone powerfully oppose estrogen action in the breasts, for example by reducing estrogen receptor expression in them. Female reproductive system Estrogens are responsible for maturation and maintenance of the vagina and uterus, and are also involved in ovarian function, such as maturation of ovarian follicles. In addition, estrogens play an important role in regulation of gonadotropin secretion. For these reasons, estrogens are required for female fertility. Neuroprotection and DNA repair Estrogen-regulated DNA repair mechanisms in the brain have neuroprotective effects. Estrogen regulates the transcription of DNA base excision repair genes as well as the translocation of the base excision repair enzymes between different subcellular compartments. Brain and behavior Sex drive Estrogens are involved in libido (sex drive) in both women and men. Cognition Verbal memory scores are frequently used as one measure of higher-level cognition. These scores vary in direct proportion to estrogen levels throughout the menstrual cycle, pregnancy, and menopause. Furthermore, estrogens administered shortly after natural or surgical menopause prevent decreases in verbal memory.
In contrast, estrogens have little effect on verbal memory if first administered years after menopause. Estrogens also have positive influences on other measures of cognitive function. However, the effect of estrogens on cognition is not uniformly favorable and depends on the timing of the dose and the type of cognitive skill being measured. The protective effects of estrogens on cognition may be mediated by estrogen's anti-inflammatory effects in the brain. Studies have also shown that the Met allele and the level of estrogen mediate the efficiency of prefrontal cortex-dependent working memory tasks. Researchers have urged further research to illuminate the role of estrogen and its potential for improving cognitive function. Mental health Estrogen is considered to play a significant role in women's mental health. Sudden estrogen withdrawal, fluctuating estrogen, and periods of sustained low estrogen levels correlate with a significant lowering of mood. Clinical recovery from postpartum, perimenopausal, and postmenopausal depression has been shown to occur after levels of estrogen were stabilized and/or restored. Menstrual exacerbation (including menstrual psychosis) is typically triggered by low estrogen levels, and is often mistaken for premenstrual dysphoric disorder. Compulsions in male lab mice, such as those in obsessive-compulsive disorder (OCD), may be caused by low estrogen levels. When estrogen levels were raised through the increased activity of the enzyme aromatase in male lab mice, OCD rituals were dramatically decreased. Hypothalamic levels of the protein encoded by the gene COMT are enhanced by increasing estrogen levels, which is believed to return mice that displayed OCD rituals to normal activity. A deficiency of aromatase, the enzyme involved in estrogen synthesis in humans, is ultimately suspected to be involved, which has therapeutic implications for humans with obsessive-compulsive disorder. Local application of estrogen in the rat hippocampus has been shown to inhibit the re-uptake of serotonin. Contrarily, local application of estrogen has been shown to block the ability of fluvoxamine to slow serotonin clearance, suggesting that the same pathways involved in SSRI efficacy may also be affected by components of local estrogen signaling pathways. Parenthood Studies have also found that fathers had lower levels of cortisol and testosterone but higher levels of estrogen (estradiol) than did non-fathers. Binge eating Estrogen may play a role in suppressing binge eating. Hormone replacement therapy using estrogen may be a possible treatment for binge eating behaviors in females. Estrogen replacement has been shown to suppress binge eating behaviors in female mice. The mechanism by which estrogen replacement inhibits binge-like eating involves serotonin (5-HT) neurons. Women exhibiting binge eating behaviors are found to have increased neuronal uptake of 5-HT in the brain, and therefore less of the neurotransmitter serotonin in the cerebrospinal fluid. Estrogen works to activate 5-HT neurons, leading to suppression of binge-like eating behaviors. It is also suggested that there is an interaction between hormone levels and eating at different points in the female menstrual cycle. Research has predicted increased emotional eating during hormonal flux, which is characterized by the high progesterone and estradiol levels that occur during the mid-luteal phase.
It is hypothesized that these changes occur due to brain changes across the menstrual cycle that are likely a genomic effect of hormones. These effects produce menstrual cycle changes, which result in hormone release leading to behavioral changes, notably binge and emotional eating. These occur especially prominently among women who are genetically vulnerable to binge eating phenotypes. Binge eating is associated with decreased estradiol and increased progesterone. Klump et al. suggest that progesterone may moderate the effects of low estradiol (such as during dysregulated eating behavior), but that this may only be true in women who have had clinically diagnosed binge episodes (BEs). Dysregulated eating is more strongly associated with such ovarian hormones in women with BEs than in women without BEs. The implantation of 17β-estradiol pellets in ovariectomized mice significantly reduced binge eating behaviors, and injections of GLP-1 in ovariectomized mice likewise decreased binge-eating behaviors. Binge eating, menstrual-cycle phase, and ovarian hormone levels were found to be correlated with one another. Masculinization in rodents In rodents, estrogens (which are locally aromatized from androgens in the brain) play an important role in psychosexual differentiation, for example, by masculinizing territorial behavior; the same is not true in humans. In humans, the masculinizing effects of prenatal androgens on behavior (and other tissues, with the possible exception of effects on bone) appear to act exclusively through the androgen receptor. Consequently, the utility of rodent models for studying human psychosexual differentiation has been questioned. Skeletal system Estrogens are responsible for both the pubertal growth spurt, which causes an acceleration in linear growth, and epiphyseal closure, which limits height and limb length, in both females and males. In addition, estrogens are responsible for bone maturation and maintenance of bone mineral density throughout life. Due to hypoestrogenism, the risk of osteoporosis increases during menopause. Cardiovascular system Women are less affected by heart disease due to the vasculoprotective action of estrogen, which helps prevent atherosclerosis. Estrogen also helps maintain the delicate balance between fighting infections and protecting arteries from damage, thus lowering the risk of cardiovascular disease. During pregnancy, high levels of estrogens increase coagulation and the risk of venous thromboembolism. Estrogen has been shown to upregulate the peptide hormone adropin. Immune system The effect of estrogen on the immune system is in general described as Th2-favoring rather than suppressive, as is the case for the male sex hormone testosterone. Indeed, women respond better to vaccines and infections and are generally less likely to develop cancer; the tradeoff is that they are more likely to develop an autoimmune disease. The Th2 shift manifests as a decrease in cellular immunity and an increase in humoral immunity (antibody production): cell-mediated immunity is downregulated, while the Th2 immune response is enhanced through stimulation of IL-4 production and Th2 differentiation. Type 1 and type 17 immune responses are downregulated, likely at least partially due to IL-4, which inhibits Th1. The effect of estrogen on different immune cell types is in line with its Th2 bias. The activity of basophils, eosinophils, and M2 macrophages is enhanced, whereas the activity of NK cells is downregulated.
Conventional dendritic cells are biased towards Th2 under the influence of estrogen, whereas plasmacytoid dendritic cells, key players in antiviral defence, have increased IFN-γ secretion. Estrogen also influences B cells by increasing their survival, proliferation, differentiation, and function, which corresponds with the higher antibody and B cell counts generally detected in women. On a molecular level, estrogen induces the above-mentioned effects on cells by acting on intracellular receptors termed ERα and ERβ, which upon ligation form either homo- or heterodimers. The genetic and nongenetic targets of the receptors differ between homo- and heterodimers. Ligation of these receptors allows them to translocate to the nucleus and act as transcription factors, either by binding estrogen response elements (EREs) on DNA or by binding DNA together with other transcription factors, e.g. NF-κB or AP-1, both of which result in RNA polymerase recruitment and further chromatin remodelling. A non-transcriptional response to estrogen stimulation has also been documented (termed membrane-initiated steroid signalling, MISS). This pathway stimulates the ERK and PI3K/AKT pathways, which are known to increase cellular proliferation and affect chromatin remodelling. Associated conditions Researchers have implicated estrogens in various estrogen-dependent conditions, such as ER-positive breast cancer, as well as a number of genetic conditions involving estrogen signaling or metabolism, such as estrogen insensitivity syndrome, aromatase deficiency, and aromatase excess syndrome. High estrogen can amplify stress-hormone responses in stressful situations. Biochemistry Biosynthesis Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea. Some estrogens are also produced in smaller amounts by other tissues such as the liver, pancreas, bone, adrenal glands, skin, brain, adipose tissue, and the breasts. These secondary sources of estrogens are especially important in postmenopausal women. The pathway of estrogen biosynthesis in extragonadal tissues is different: these tissues are not able to synthesize C19 steroids, and therefore depend on C19 supplies from other tissues and on the level of aromatase. In females, synthesis of estrogens starts in theca interna cells in the ovary, with the synthesis of androstenedione from cholesterol. Androstenedione is a substance of weak androgenic activity which serves predominantly as a precursor for more potent androgens such as testosterone, as well as for estrogen. This compound crosses the basal membrane into the surrounding granulosa cells, where it is converted either immediately into estrone, or into testosterone and then estradiol in an additional step. The conversion of androstenedione to testosterone is catalyzed by 17β-hydroxysteroid dehydrogenase (17β-HSD), whereas the conversion of androstenedione and testosterone into estrone and estradiol, respectively, is catalyzed by aromatase; both enzymes are expressed in granulosa cells. In contrast, granulosa cells lack 17α-hydroxylase and 17,20-lyase, whereas theca cells express these enzymes and 17β-HSD but lack aromatase. Hence, both granulosa and theca cells are essential for the production of estrogen in the ovaries (summarized schematically below). Estrogen levels vary through the menstrual cycle, with levels highest near the end of the follicular phase just before ovulation.
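The two-cell cooperation just described can be restated as an ordered list of enzymatic steps. The sketch below is only a schematic summary of the pathway given in this section (theca cells supply the androgen precursor, granulosa cells convert and aromatize it); the data structure and names are illustrative assumptions, not a published model.

```python
# Schematic of ovarian estrogen biosynthesis as described above:
# (substrate, enzyme, product, cell type where the step occurs)
two_cell_pathway = [
    ("cholesterol",     "multiple steps",  "androstenedione", "theca interna"),
    ("androstenedione", "aromatase",       "estrone",         "granulosa"),
    ("androstenedione", "17beta-HSD",      "testosterone",    "granulosa"),
    ("testosterone",    "aromatase",       "estradiol",       "granulosa"),
]

for substrate, enzyme, product, cell in two_cell_pathway:
    print(f"{substrate:>16} --[{enzyme}]--> {product:<15} ({cell} cells)")
```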
In males, estrogen is also produced by the Sertoli cells when FSH binds to their FSH receptors. Distribution In the circulation, estrogens are bound to plasma proteins, namely albumin and/or sex hormone-binding globulin. Metabolism Estrogens are metabolized via hydroxylation by cytochrome P450 enzymes such as CYP1A1 and CYP3A4, and via conjugation by estrogen sulfotransferases (sulfation) and UDP-glucuronyltransferases (glucuronidation). In addition, estradiol is dehydrogenated by 17β-hydroxysteroid dehydrogenase into the much less potent estrogen estrone. These reactions occur primarily in the liver, but also in other tissues. Excretion Estrogens are inactivated primarily by the kidneys and liver and excreted via the gastrointestinal tract in the form of conjugates, found in feces, bile, and urine. Medical use Estrogens are used as medications, mainly in hormonal contraception, hormone replacement therapy, and, as part of feminizing hormone therapy, to treat gender dysphoria in transgender women and other transfeminine individuals. Chemistry The estrogen steroid hormones are estrane steroids. History In 1929, Adolf Butenandt and Edward Adelbert Doisy independently isolated and purified estrone, the first estrogen to be discovered. Estriol and estradiol were then discovered in 1930 and 1933, respectively. Shortly following their discovery, estrogens, both natural and synthetic, were introduced for medical use. Examples include estriol glucuronide (Emmenin, Progynon), estradiol benzoate, conjugated estrogens (Premarin), diethylstilbestrol, and ethinylestradiol. The word estrogen derives from Ancient Greek: from "oestros" (a periodic state of sexual activity in female mammals) and genos (generating). It was first published in the early 1920s and referenced as "oestrin". Over the years, American English adapted the spelling of estrogen to fit its phonetic pronunciation. Society and culture Etymology The name estrogen is derived from the Greek oîstros, literally meaning "verve" or "inspiration" but figuratively sexual passion or desire, and the suffix -gen, meaning "producer of". Environment A range of synthetic and natural substances that possess estrogenic activity have been identified in the environment and are referred to as xenoestrogens. These include synthetic substances such as bisphenol A, as well as metalloestrogens (e.g., cadmium). Plant products with estrogenic activity are called phytoestrogens (e.g., coumestrol, daidzein, genistein, miroestrol). Those produced by fungi are known as mycoestrogens (e.g., zearalenone). Estrogens are among the wide range of endocrine-disrupting compounds (EDCs) because they have high estrogenic potency. When an EDC makes its way into the environment, it may cause male reproductive dysfunction in wildlife and humans. The estrogen excreted from farm animals makes its way into fresh water systems. During the germination period of reproduction, fish are exposed to low levels of estrogen, which may cause reproductive dysfunction in male fish. Cosmetics Some hair shampoos on the market include estrogens and placental extracts; others contain phytoestrogens. In 1998, there were case reports of four prepubescent African-American girls developing breasts after exposure to these shampoos. In 1993, the FDA determined that over-the-counter topically applied hormone-containing drug products for human use are not generally recognized as safe and effective, and are misbranded.
An accompanying proposed rule deals with cosmetics, concluding that any use of natural estrogens in a cosmetic product makes the product an unapproved new drug and that any cosmetic using the term "hormone" in the text of its labeling or in its ingredient statement makes an implied drug claim, subjecting such a product to regulatory action. In addition to being considered misbranded drugs, products claiming to contain placental extract may also be deemed misbranded cosmetics if the extract has been prepared from placentas from which the hormones and other biologically active substances have been removed and the extracted substance consists principally of protein. The FDA recommends that this substance be identified by a name other than "placental extract" and that its composition be described more accurately, because consumers associate the name "placental extract" with a therapeutic use of some biological activity.
Biology and health sciences
Biochemistry and molecular biology
null
22594
https://en.wikipedia.org/wiki/Omega-3%20fatty%20acid
Omega-3 fatty acid
Omega−3 fatty acids, also called omega−3 oils, ω−3 fatty acids or n−3 fatty acids, are polyunsaturated fatty acids (PUFAs) characterized by the presence of a double bond three atoms away from the terminal methyl group in their chemical structure. They are widely distributed in nature, are important constituents of animal lipid metabolism, and play an important role in the human diet and in human physiology. The three types of omega−3 fatty acids involved in human physiology are α-linolenic acid (ALA), eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). ALA can be found in plants, while DHA and EPA are found in algae and fish. Marine algae and phytoplankton are primary sources of omega−3 fatty acids. DHA and EPA accumulate in fish that eat these algae. Common sources of plant oils containing ALA include walnuts, edible seeds, and flaxseeds as well as hempseed oil, while sources of EPA and DHA include fish and fish oils, and algae oil. Almost without exception, animals are unable to synthesize the essential omega−3 fatty acid ALA and can only obtain it through diet. However, they can use ALA, when available, to form EPA and DHA, by creating additional double bonds along its carbon chain (desaturation) and extending it (elongation). Namely, ALA (18 carbons and 3 double bonds) is used to make EPA (20 carbons and 5 double bonds), which is then used to make DHA (22 carbons and 6 double bonds). The ability to make the longer-chain omega−3 fatty acids from ALA may be impaired in aging. In foods exposed to air, unsaturated fatty acids are vulnerable to oxidation and rancidity. There is no high-quality evidence that dietary supplementation with omega−3 fatty acids reduces the risk of cancer or cardiovascular disease. Fish oil supplement studies have failed to support claims of preventing heart attacks or strokes or any vascular disease outcomes. History In 1929, George and Mildred Burr discovered that fatty acids were critical to health. If fatty acids were absent from the diet, a life-threatening deficiency syndrome ensued. The Burrs coined the phrase "essential fatty acids". Since then, researchers have shown a growing interest in unsaturated essential fatty acids as they form the framework for the organism's cell membranes. Subsequently, awareness of the health benefits of essential fatty acids has dramatically increased since the 1980s. On September 8, 2004, the U.S. Food and Drug Administration gave "qualified health claim" status to EPA and DHA omega−3 fatty acids, stating, "supportive but not conclusive research shows that consumption of EPA and DHA [omega−3] fatty acids may reduce the risk of coronary heart disease". This updated and modified their health risk advice letter of 2001 (see below). The Canadian Food Inspection Agency has recognized the importance of DHA omega−3 and permits the following claim for DHA: "DHA, an omega−3 fatty acid, supports the normal physical development of the brain, eyes, and nerves primarily in children under two years of age." Historically, whole food diets contained sufficient amounts of omega−3, but because omega−3 is readily oxidized, the trend toward shelf-stable processed foods has led to a deficiency in omega−3 in manufactured foods. Nomenclature The terms ω−3 ("omega−3") fatty acid and n−3 fatty acid are derived from the nomenclature of organic chemistry. One way in which an unsaturated fatty acid is named is determined by the location, in its carbon chain, of the double bond which is closest to the methyl end of the molecule. 
In general terminology, n (or ω) represents the locant of the methyl end of the molecule, while the number n−x (or ω−x) refers to the locant of its nearest double bond. Thus, in omega−3 fatty acids in particular, there is a double bond located at the carbon numbered 3, starting from the methyl end of the fatty acid chain. This classification scheme is useful since most chemical changes occur at the carboxyl end of the molecule, while the methyl group and its nearest double bond are unchanged in most chemical or enzymatic reactions. In the expressions n−x or ω−x, the symbol is a minus sign rather than a hyphen (or dash), although it is never read as such. Also, the symbol n (or ω) represents the locant of the methyl end, counted from the carboxyl end of the fatty acid carbon chain. For instance, in an omega−3 fatty acid with 18 carbon atoms (see illustration), where the methyl end is at location 18 from the carboxyl end, n (or ω) represents the number 18, and the notation n−3 (or ω−3) represents the subtraction 18−3 = 15, where 15 is the locant of the double bond which is closest to the methyl end, counted from the carboxyl end of the chain. Although n and ω (omega) are synonymous, the IUPAC recommends that n be used to identify the highest carbon number of a fatty acid. Nevertheless, the more common name – omega−3 fatty acid – is used in both the lay media and scientific literature. Example For example, α-linolenic acid (ALA; illustration) is an 18-carbon chain having three double bonds, the first located at the third carbon from the methyl end of the fatty acid chain. Hence, it is an omega−3 fatty acid. Counting from the other end of the chain, that is the carboxyl end, the three double bonds are located at carbons 9, 12, and 15. These three locants are typically indicated as Δ9c, Δ12c, Δ15c, or cisΔ9, cisΔ12, cisΔ15, or cis-cis-cis-Δ9,12,15, where c or cis means that the double bonds have a cis configuration. α-Linolenic acid is polyunsaturated (containing more than one double bond) and is also described by a lipid number, 18:3, meaning that there are 18 carbon atoms and 3 double bonds. Chemistry An omega−3 fatty acid is a fatty acid with multiple double bonds, where the first double bond is between the third and fourth carbon atoms from the methyl end of the carbon chain. "Short-chain" omega−3 fatty acids have a chain of 18 carbon atoms or less, while "long-chain" omega−3 fatty acids have a chain of 20 or more. Three omega−3 fatty acids are important in human physiology: α-linolenic acid (18:3, n−3; ALA), eicosapentaenoic acid (20:5, n−3; EPA), and docosahexaenoic acid (22:6, n−3; DHA). These three polyunsaturates have 3, 5, or 6 double bonds in a carbon chain of 18, 20, or 22 carbon atoms, respectively. As with most naturally produced fatty acids, all double bonds are in the cis-configuration; in other words, the two hydrogen atoms are on the same side of the double bond, and the double bonds are interrupted by methylene bridges (−CH2−), so that there are two single bonds between each pair of adjacent double bonds. The hydrogen atoms at bis-allylic (between double bonds) sites are prone to oxidation by free radicals. Replacement of hydrogen atoms with deuterium atoms at these sites protects the omega−3 fatty acid from lipid peroxidation and ferroptosis. List of omega−3 fatty acids This table lists several different names for the most common omega−3 fatty acids found in nature. Forms Omega−3 fatty acids occur naturally in two forms, triglycerides and phospholipids.
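The locant arithmetic described above can be made concrete with a short sketch. The example below is illustrative only (the function name and layout are not from the source): it takes the chain length and the double-bond locants counted from the carboxyl end, and derives the omega (n−x) class and the lipid number, reproducing the ALA case (18 carbons, double bonds at Δ9, Δ12, Δ15).

```python
def classify_fatty_acid(chain_length, delta_positions):
    """Derive the omega (n-x) class and lipid number of an unsaturated fatty acid.

    chain_length    -- number of carbons, counted from the carboxyl end
    delta_positions -- locants of the double bonds, counted from the carboxyl end

    Illustrative sketch only: real nomenclature also records cis/trans geometry.
    """
    # The double bond closest to the methyl end has the highest Delta locant.
    nearest_to_methyl = max(delta_positions)
    # n - x: subtract that locant from the chain length (the methyl carbon's number).
    omega_class = chain_length - nearest_to_methyl
    # Lipid number: carbon count : number of double bonds.
    lipid_number = f"{chain_length}:{len(delta_positions)}"
    return omega_class, lipid_number

# alpha-linolenic acid: 18 carbons, double bonds at Delta-9, 12, 15.
omega, lipid = classify_fatty_acid(18, [9, 12, 15])
print(f"omega-{omega} fatty acid, lipid number {lipid}")  # omega-3 fatty acid, lipid number 18:3
```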
In the triglycerides, they, together with other fatty acids, are bonded to glycerol; three fatty acids are attached to glycerol. Phospholipid omega−3 is composed of two fatty acids attached to a phosphate group via glycerol. The triglycerides can be converted to the free fatty acid or to methyl or ethyl esters, and the individual esters of omega−3 fatty acids are available. Mechanism of action The 'essential' fatty acids were given their name when researchers found that they are essential to normal growth in young children and animals. The omega−3 fatty acid DHA, also known as docosahexaenoic acid, is found in high abundance in the human brain. It is produced by a desaturation process, but humans lack the desaturase enzyme, which acts to insert double bonds at the ω6 and ω3 position. Therefore, the ω6 and ω3 polyunsaturated fatty acids cannot be synthesized, are appropriately called essential fatty acids, and must be obtained from the diet. In 1964, it was discovered that enzymes found in sheep tissues convert omega−6 arachidonic acid into the inflammatory agent, prostaglandin E2, which is involved in the immune response of traumatized and infected tissues. By 1979, eicosanoids were further identified, including thromboxanes, prostacyclins, and leukotrienes. The eicosanoids typically have a short period of activity in the body, starting with synthesis from fatty acids and ending with metabolism by enzymes. If the rate of synthesis exceeds the rate of metabolism, the excess eicosanoids may have deleterious effects. Researchers found that certain omega−3 fatty acids are also converted into eicosanoids and docosanoids, but at a slower rate. If both omega−3 and omega−6 fatty acids are present, they will "compete" to be transformed, so the ratio of long-chain omega−3:omega−6 fatty acids directly affects the type of eicosanoids that are produced. Interconversion Conversion efficiency of ALA to EPA and DHA Humans can convert short-chain omega−3 fatty acids to long-chain forms (EPA, DHA) with an efficiency below 5%. The omega−3 conversion efficiency is greater in women than in men, but less studied. Higher ALA and DHA values found in plasma phospholipids of women may be due to the higher activity of desaturases, especially that of delta-6-desaturase. These conversions occur competitively with omega−6 fatty acids, which are essential closely related chemical analogues that are derived from linoleic acid. They both utilize the same desaturase and elongase proteins in order to synthesize inflammatory regulatory proteins. The products of both pathways are vital for growth making a balanced diet of omega−3 and omega−6 important to an individual's health. A balanced intake ratio of 1:1 was believed to be ideal in order for proteins to be able to synthesize both pathways sufficiently, but this has been controversial as of recent research. The conversion of ALA to EPA and further to DHA in humans has been reported to be limited, but varies with individuals. Women have higher ALA-to-DHA conversion efficiency than men, which is presumed to be due to the lower rate of use of dietary ALA for beta-oxidation. One preliminary study showed that EPA can be increased by lowering the amount of dietary linoleic acid, and DHA can be increased by elevating intake of dietary ALA. Omega−6 to omega−3 ratio Human diet has changed rapidly in recent centuries resulting in a reported increased diet of omega−6 in comparison to omega−3. 
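As a rough illustration of why the low ALA-to-long-chain conversion efficiency matters, the sketch below estimates how much EPA and DHA a given ALA intake might yield. The conversion fractions and the flaxseed-oil figure are assumptions chosen only to reflect the "below 5%" efficiency mentioned above; they are not measured physiological values.

```python
def estimate_long_chain_yield(ala_mg, ala_to_epa=0.05, ala_to_dha=0.005):
    """Very rough estimate of EPA and DHA formed from dietary ALA.

    The conversion fractions are illustrative assumptions only (overall
    efficiency is reported to be below 5% and varies with sex, diet, and age);
    they are not physiological constants.
    """
    return ala_mg * ala_to_epa, ala_mg * ala_to_dha

# Assume roughly 7,000 mg of ALA from one tablespoon of flaxseed oil (illustrative figure).
epa_mg, dha_mg = estimate_long_chain_yield(7000)
print(f"~{epa_mg:.0f} mg EPA and ~{dha_mg:.0f} mg DHA")  # ~350 mg EPA and ~35 mg DHA
```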
The rapid evolution of human diet away from a 1:1 omega−3 and omega−6 ratio, such as during the Neolithic Agricultural Revolution, has presumably been too fast for humans to have adapted to biological profiles adept at balancing omega−3 and omega−6 ratios of 1:1. This is commonly believed to be the reason why modern diets are correlated with many inflammatory disorders. While omega−3 polyunsaturated fatty acids may be beneficial in preventing heart disease in humans, the level of omega−6 polyunsaturated fatty acids (and, therefore, the ratio) does not matter. Both omega−6 and omega−3 fatty acids are essential: humans must consume them in their diet. Omega−6 and omega−3 eighteen-carbon polyunsaturated fatty acids compete for the same metabolic enzymes, thus the omega−6:omega−3 ratio of ingested fatty acids has significant influence on the ratio and rate of production of eicosanoids, a group of hormones intimately involved in the body's inflammatory and homeostatic processes, which include the prostaglandins, leukotrienes, and thromboxanes, among others. Altering this ratio can change the body's metabolic and inflammatory state. Metabolites of omega−6 are more inflammatory (esp. arachidonic acid) than those of omega−3. However, in terms of heart health, omega−6 fatty acids are less harmful than they are presumed to be. A meta-analysis of six randomized trials found that replacing saturated fat with omega−6 fats reduced the risk of coronary events by 24%. A healthy ratio of omega−6 to omega−3 is needed; healthy ratios, according to some authors, range from 1:1 to 1:4. Other authors believe that a ratio of 4:1 (4 times as much omega−6 as omega−3) is already healthy. Typical Western diets provide ratios of between 10:1 and 30:1 (i.e., dramatically higher levels of omega−6 than omega−3). The ratios of omega−6 to omega−3 fatty acids in some common vegetable oils are: canola 2:1, hemp 2–3:1, soybean 7:1, olive 3–13:1, sunflower (no omega−3), flax 1:3, cottonseed (almost no omega−3), peanut (no omega−3), grapeseed oil (almost no omega−3) and corn oil 46:1. Biochemistry Transporters DHA in the form of lysophosphatidylcholine is transported into the brain by a membrane transport protein, MFSD2A, which is exclusively expressed in the endothelium of the blood–brain barrier. Dietary sources Dietary recommendations In the United States, the Institute of Medicine publishes a system of Dietary Reference Intakes, which includes Recommended Dietary Allowances (RDAs) for individual nutrients, and Acceptable Macronutrient Distribution Ranges (AMDRs) for certain groups of nutrients, such as fats. When there is insufficient evidence to determine an RDA, the institute may publish an Adequate Intake (AI) instead, which has a similar meaning but is less certain. The AI for α-linolenic acid is 1.6 grams/day for men and 1.1 grams/day for women, while the AMDR is 0.6% to 1.2% of total energy. Because the physiological potency of EPA and DHA is much greater than that of ALA, it is not possible to estimate one AMDR for all omega−3 fatty acids. Approximately 10 percent of the AMDR can be consumed as EPA and/or DHA. The Institute of Medicine has not established a RDA or AI for EPA, DHA or the combination, so there is no Daily Value (DVs are derived from RDAs), no labeling of foods or supplements as providing a DV percentage of these fatty acids per serving, and no labeling a food or supplement as an excellent source, or "High in..." 
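The dietary ratio discussed above depends on absolute amounts consumed, not only on the ratios of individual oils. The sketch below uses hypothetical intake figures (placeholders, not reference nutrition data) to sum omega−6 and omega−3 contributions from several foods and report the overall ratio, which is the quantity compared against the 1:1 to 4:1 targets cited by some authors.

```python
# Hypothetical daily intakes: (grams of omega-6, grams of omega-3) per food item.
# The per-food numbers are illustrative placeholders, not reference data.
daily_intake = {
    "sunflower oil (1 tbsp)": (8.9, 0.0),
    "flaxseed oil (1 tsp)":   (0.6, 2.4),
    "salmon (100 g)":         (0.2, 2.0),
    "walnuts (30 g)":         (11.4, 2.7),
}

total_n6 = sum(n6 for n6, n3 in daily_intake.values())
total_n3 = sum(n3 for n6, n3 in daily_intake.values())
ratio = total_n6 / total_n3
print(f"omega-6: {total_n6:.1f} g, omega-3: {total_n3:.1f} g, ratio = {ratio:.1f}:1")
```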
As for safety, there was insufficient evidence as of 2005 to set an upper tolerable limit for omega−3 fatty acids, although the FDA has advised that adults can safely consume up to a total of 3 grams per day of combined DHA and EPA, with no more than 2 g from dietary supplements. The European Commission sponsored a working group to develop recommendations on dietary fat intake in pregnancy and lactation. In 2008, the working group published consensus recommendations, including the following: "pregnant and lactating women should aim to achieve an average dietary intake of at least 200 mg DHA/day"; "women of childbearing age should aim to consume one to two portions of sea fish per week, including oily fish"; and "intake of the DHA precursor, α-linolenic acid, is far less effective with regard to DHA deposition in fetal brain than preformed DHA". However, the seafood supply to meet these recommendations is currently too low in most European countries and, if met, would be unsustainable. In the EU, the EFSA publishes Dietary Reference Values (DRVs) that include Adequate Intake (AI) values for DHA during the second half of the first year of life (from the beginning of the 7th month to the first birthday), in addition to a combined intake of EPA and DHA of 250 mg/day. The American Heart Association (AHA) has made recommendations for EPA and DHA due to their cardiovascular benefits: individuals with no history of coronary heart disease or myocardial infarction should consume oily fish two times per week; and "Treatment is reasonable" for those having been diagnosed with coronary heart disease. For the latter, the AHA does not recommend a specific amount of EPA + DHA, although it notes that most trials were at or close to 1000 mg/day. The benefit appears to be on the order of a 9% decrease in relative risk. The European Food Safety Authority (EFSA) approved a claim "EPA and DHA contributes to the normal function of the heart" for products that contain at least 250 mg EPA + DHA. The report did not address the issue of people with pre-existing heart disease. The World Health Organization recommends regular fish consumption (1–2 servings per week, equivalent to 200 to 500 mg/day EPA + DHA) as protective against coronary heart disease and ischaemic stroke. Contamination Heavy metal poisoning from consuming fish oil supplements is highly unlikely, because heavy metals (mercury, lead, nickel, arsenic, and cadmium) selectively bind with protein in the fish flesh rather than accumulate in the oil. However, other contaminants (PCBs, furans, dioxins, and PBDEs) might be found, especially in less-refined fish oil supplements. Throughout their history, the Council for Responsible Nutrition and the World Health Organization have published acceptability standards regarding contaminants in fish oil. The most stringent current standard is the International Fish Oils Standard. Fish oils that are molecularly distilled under vacuum typically meet this highest grade; levels of contaminants are stated in parts per billion or parts per trillion. Rancidity A 2022 study found that a number of products on the market used oxidised oils, with the rancidity often masked by flavourings. Another study in 2015 found that an average of 20% of products had excess oxidation. Whether rancid fish oil is harmful remains unclear. Some studies show that highly oxidised fish oil can have a negative impact on cholesterol levels. Animal testing showed that high doses have toxic effects.
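A simple way to read the quantitative guidance above is as a pair of bounds to check a supplement plan against. The sketch below encodes only the FDA advisory figures quoted in this section (at most 3 g/day combined EPA + DHA, of which no more than 2 g from supplements); it is a reading aid under those assumed figures, not dosing advice.

```python
def within_fda_advisory(total_epa_dha_g, from_supplements_g):
    """Check an intake plan against the FDA advisory figures cited above:
    at most 3 g/day combined EPA+DHA, with no more than 2 g/day from supplements.
    Illustrative only -- not medical guidance.
    """
    return total_epa_dha_g <= 3.0 and from_supplements_g <= 2.0

print(within_fda_advisory(total_epa_dha_g=1.5, from_supplements_g=1.0))  # True
print(within_fda_advisory(total_epa_dha_g=3.5, from_supplements_g=2.5))  # False
```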
Furthermore, rancid oil is likely to be less effective than fresh fish oil. Fish The most widely available dietary source of EPA and DHA is oily fish, such as salmon, herring, mackerel, anchovies, and sardines. Oils from these fishes have around seven times as much omega−3 as omega−6. Other oily fish, such as tuna, also contain n−3 in somewhat lesser amounts. Although fish are a dietary source of omega−3 fatty acids, fish do not synthesize omega−3 fatty acids, but rather obtain them via their food supply, including algae or plankton. In order for farmed marine fish to have amounts of EPA and DHA comparable to those of wild-caught fish, their feed must be supplemented with EPA and DHA, most commonly in the form of fish oil. For this reason, 81% of the global fish oil supply in 2009 was consumed by aquaculture. By 2019, two alternative sources of EPA and DHA for fish have been partially commercialized: genetically modified canola oil and Schizochytrium algal oil. Fish oil Marine and freshwater fish oil vary in content of arachidonic acid, EPA and DHA. They also differ in their effects on organ lipids. Not all forms of fish oil may be equally digestible. Of four studies that compare bioavailability of the glyceryl ester form of fish oil vs. the ethyl ester form, two have concluded the natural glyceryl ester form is better, and the other two studies did not find a significant difference. No studies have shown the ethyl ester form to be superior, although it is cheaper to manufacture. Krill Krill oil is a source of omega−3 fatty acids. The effect of krill oil, at a lower dose of EPA + DHA (62.8%), was demonstrated to be similar to that of fish oil on blood lipid levels and markers of inflammation in healthy humans. While not an endangered species, krill are a mainstay of the diets of many ocean-based species including whales, causing environmental and scientific concerns about their sustainability. Preliminary studies indicate that the DHA and EPA omega−3 fatty acids found in krill oil are more bio-available than in fish oil. Additionally, krill oil contains astaxanthin, a marine-source keto-carotenoid antioxidant that may act synergistically with EPA and DHA. Plant sources Linseed (or flaxseed) (Linum usitatissimum) and its oil are perhaps the most widely available botanical source of the omega−3 fatty acid ALA. Flaxseed oil consists of approximately 55% ALA, which makes it six times richer than most fish oils in omega−3 fatty acids. A portion of this is converted by the body to EPA and DHA, though the actual converted percentage may differ between men and women. The longer-chain EPA and DHA are only naturally made by marine algae and phytoplankton. The microalgae Crypthecodinium cohnii and Schizochytrium are rich sources of DHA, but not EPA, and can be produced commercially in bioreactors for use as food additives. Oil from brown algae (kelp) is a source of EPA. The alga Nannochloropsis also has high levels of EPA. Some transgenic initiatives have transferred the ability to make EPA and DHA into existing high-yielding crop species of land plants: Camelina sativa: In 2013, Rothamsted Research reported two genetically modified forms of this plant. Oil from the seeds of this plant contained on average 15% ALA, 11% EPA, and 8% DHA in one development and 11% ALA and 24% EPA in another. Canola: In 2011, CSIRO, GRDC, and Nufarm developed a version of canola that produces DHA in seeds; the oil contains 10% DHA and almost no EPA. In 2018, it was approved as an animal feed additive in Australia. 
In 2021, the US FDA acknowledged it as a New Dietary Ingredient for humans. Separately, Cargill has commercialized a different strain of canola that produces EPA and DHA for fish feed. The oil contains 8.1% EPA and 0.8% DHA. Eggs Eggs produced by hens fed a diet of greens and insects contain higher levels of omega−3 fatty acids than those produced by chickens fed corn or soybeans. In addition to feeding chickens insects and greens, fish oils may be added to their diets to increase the omega−3 fatty acid concentrations in eggs. The addition of flax and canola seeds, both good sources of alpha-linolenic acid, to the diets of laying chickens, increases the omega−3 content of the eggs, predominantly DHA. However, this enrichment could lead to an increment of lipid oxidation in the eggs if the seeds are used in higher doses, without using an appropriate antioxidant. The addition of green algae or seaweed to the diets boosts the content of DHA and EPA, which are the forms of omega−3 approved by the FDA for medical claims. A common consumer complaint is "Omega−3 eggs can sometimes have a fishy taste if the hens are fed marine oils". Meat Omega−3 fatty acids are formed in the chloroplasts of green leaves and algae. While seaweeds and algae are the sources of omega−3 fatty acids present in fish, grass is the source of omega−3 fatty acids present in grass-fed animals. When cattle are taken off omega−3 fatty acid-rich grass and shipped to a feedlot to be fattened on omega−3 fatty acid deficient grain, they begin losing their store of this beneficial fat. Each day that an animal spends in the feedlot, the amount of omega−3 fatty acids in its meat is diminished. The omega−6:omega−3 ratio of grass-fed beef is about 2:1, making it a more useful source of omega−3 than grain-fed beef, which usually has a ratio of 4:1. In a 2009 joint study by the USDA and researchers at Clemson University in South Carolina, grass-fed beef was compared with grain-finished beef. The researchers found that grass-finished beef is higher in moisture content, 42.5% lower total lipid content, 54% lower in total fatty acids, 54% higher in beta-carotene, 288% higher in vitamin E (alpha-tocopherol), higher in the B-vitamins thiamin and riboflavin, higher in the minerals calcium, magnesium, and potassium, 193% higher in total omega−3s, 117% higher in CLA (cis-9, trans-11 octadecenoic acid, a conjugated linoleic acid, which is a potential cancer fighter), 90% higher in vaccenic acid (which can be transformed into CLA), lower in the saturated fats, and has a healthier ratio of omega−6 to omega−3 fatty acids (1.65 vs 4.84). Protein and cholesterol content were equal. The omega−3 content of chicken meat may be enhanced by increasing the animals' dietary intake of grains high in omega−3, such as flax, chia, and canola. Kangaroo meat is also a source of omega−3, with fillet and steak containing 74 mg per 100 g of raw meat. Seal oil Seal oil is a source of EPA, DPA, and DHA, and is commonly used in Arctic regions. According to Health Canada, it helps to support the development of the brain, eyes, and nerves in children up to 12 years of age. Like all seal products, it is not allowed to be imported into the European Union. A Canadian company, FeelGood Natural Health, pleaded guilty in 2023 to illegally selling seal oil capsules to American consumers. The company sold over 900 bottles of the capsules, worth over $10,000. 
Seal oil is made from the blubber of dead seals, and is illegal to sell in the United States under the Marine Mammal Protection Act. The global population of harp seals stands at around 7 million, and they have been hunted in Canada for thousands of years. FeelGood was sentenced to pay a fine of $20,000 and three years of probation. Other sources A trend in the early 21st century was to fortify food with omega−3 fatty acids. Health effects of omega−3 supplementation The association between supplementation and a lower risk of all-cause mortality is inconclusive. Cancer There is insufficient evidence that supplementation with omega−3 fatty acids has an effect on different cancers. Omega−3 supplements do not improve body weight, muscle maintenance or quality of life in cancer patients. Cardiovascular disease Moderate and high quality evidence from a 2020 review showed that EPA and DHA, such as that found in omega−3 polyunsaturated fatty acid supplements, does not appear to improve mortality or cardiovascular health. There is weak evidence indicating that α-linolenic acid may be associated with a small reduction in the risk of a cardiovascular event or the risk of arrhythmia. A 2018 meta-analysis found no support that daily intake of one gram of omega−3 fatty acid in individuals with a history of coronary heart disease prevents fatal coronary heart disease, nonfatal myocardial infarction or any other vascular event. However, omega−3 fatty acid supplementation greater than one gram daily for at least a year may be protective against cardiac death, sudden death, and myocardial infarction in people who have a history of cardiovascular disease. No protective effect against the development of stroke or all-cause mortality was seen in this population. A 2021 meta-analysis found that supplementation was associated with a reduced risk of myocardial infarction and coronary heart disease. Fish oil supplementation has not been shown to benefit revascularization or abnormal heart rhythms and has no effect on heart failure hospital admission rates. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes. In the EU, a review by the European Medicines Agency of omega−3 fatty acid medicines containing a combination of an ethyl ester of eicosapentaenoic acid and docosahexaenoic acid at a dose of 1 g per day concluded that these medicines are not effective in secondary prevention of heart problems in people who have had a myocardial infarction. Evidence suggests that omega−3 fatty acids modestly lower blood pressure (systolic and diastolic) in people with hypertension and in people with normal blood pressure. Omega−3 fatty acids can also reduce heart rate, an emerging risk factor. Some evidence suggests that people with certain circulatory problems, such as varicose veins, may benefit from the consumption of EPA and DHA, which may stimulate blood circulation and increase the breakdown of fibrin, a protein involved in blood clotting and scar formation. Omega−3 fatty acids reduce blood triglyceride levels, but do not significantly change the level of LDL cholesterol or HDL cholesterol. The American Heart Association position (2011) is that borderline elevated triglycerides, defined as 150–199 mg/dL, can be lowered by 0.5–1.0 grams of EPA and DHA per day; high triglycerides 200–499 mg/dL benefit from 1–2 g/day; and >500 mg/dL be treated under a physician's supervision with 2–4 g/day using a prescription product. 
In this population, omega−3 fatty acid supplementation decreases the risk of heart disease by about 25%. A 2019 review found that omega−3 fatty acid supplements make little or no difference to cardiovascular mortality and that people with myocardial infarction have no benefit in taking the supplements. A 2021 review found that omega−3 supplementation did not affect cardiovascular disease outcomes. A 2021 review concluded that use of omega−3 supplements was associated with an increased risk of atrial fibrillation in people having high blood triglycerides. A meta-analysis showed that use of marine omega−3 supplementation was associated with an increased risk of atrial fibrillation, with the risk appearing to increase for doses greater than one gram per day. Chronic kidney disease In people with chronic kidney disease (CKD) who require hemodialysis, vascular blockage due to clotting may prevent dialysis therapy. Omega−3 fatty acids contribute to the production of eicosanoid molecules that reduce clotting. However, a Cochrane review in 2018 did not find clear evidence that omega−3 supplementation has any impact on the prevention of vascular blockage in people with CKD. There was also moderate certainty that supplementation did not prevent hospitalisation or death within a 12-month period. Stroke A 2022 Cochrane review of controlled trials did not find clear evidence that marine-derived omega−3 supplementation improves cognitive and physical recovery or social, and emotional wellbeing following stroke diagnosis, nor prevents stroke recurrence and mortality. In this review, mood appeared to worsen slightly among those receiving 3g fish oil supplementation for 12 weeks; psychometric scores changed by 1.41 (0.07 to 2.75) points less than those receiving palm and soy oil. However, this represented only a single small study and was not observed in a study lasting more than 3 months. Overall, the review was limited by the low number of high-quality evidence available. Inflammation A 2013 systematic review found tentative evidence of benefit for lowering inflammation levels in healthy adults and in people with one or more biomarkers of metabolic syndrome. Consumption of omega−3 fatty acids from marine sources lowers blood markers of inflammation such as C-reactive protein, interleukin 6, and TNF alpha. For rheumatoid arthritis, one systematic review found consistent but modest evidence for the effect of marine n−3 PUFAs on symptoms such as "joint swelling and pain, duration of morning stiffness, global assessments of pain and disease activity" as well as the use of non-steroidal anti-inflammatory drugs. The American College of Rheumatology has stated that there may be modest benefit from the use of fish oils, but that it may take months for effects to be seen, and cautions for possible gastrointestinal side effects and the possibility of the supplements containing mercury or vitamin A at toxic levels. The National Center for Complementary and Integrative Health has concluded that "supplements containing omega−3 fatty acids... may help relieve rheumatoid arthritis symptoms" but warns that such supplements "may interact with drugs that affect blood clotting". Developmental disabilities One meta-analysis concluded that omega−3 fatty acid supplementation demonstrated a modest effect for improving ADHD symptoms. 
A Cochrane review of PUFA (not necessarily omega−3) supplementation found "there is little evidence that PUFA supplementation provides any benefit for the symptoms of ADHD in children and adolescents", while a different review found "insufficient evidence to draw any conclusion about the use of PUFAs for children with specific learning disorders". Another review concluded that the evidence is inconclusive for the use of omega−3 fatty acids in behavior and non-neurodegenerative neuropsychiatric disorders such as ADHD and depression. A 2015 meta-analysis of the effect of omega−3 supplementation during pregnancy did not demonstrate a decrease in the rate of preterm birth or improve outcomes in women with singleton pregnancies with no prior preterm births. A 2018 Cochrane systematic review with moderate to high quality of evidence suggested that omega−3 fatty acids may reduce risk of perinatal death, risk of low body weight babies; and possibly mildly increased LGA babies. A 2021 umbrella review with moderate to high quality of evidence suggested that "omega-3 supplementation during pregnancy can exert favorable effects against pre-eclampsia, low-birth weight, pre-term delivery, and post-partum depression, and can improve anthropometric measures, immune system, and visual activity in infants and cardiometabolic risk factors in pregnant mothers." Mental health Omega−3 supplementation has not been shown to significantly affect symptoms of anxiety, major depressive disorder or schizophrenia. A 2021 Cochrane review concluded that there is not "sufficient high‐certainty evidence to determine the effects of n‐3PUFAs as a treatment for MDD". Omega−3 fatty acids have also been investigated as an add-on for the treatment of depression associated with bipolar disorder although there is limited data available. Two reviews have suggested that omega−3 fatty acid supplementation significantly improves depressive symptoms in perinatal women. A 2015 study concluded that there are multiple factors responsible for depression and deficiency of omega−3 fatty acids can be one of them. It further stated that only those patients who have depression due to insufficient omega−3 fatty acids can respond well to the omega−3 supplements while others are unlikely to get any positive effects. Meta-analysis suggest that supplements with higher concentration of EPA than DHA are more likely to act as anti-depressants. In contrast to dietary supplementation studies, there is significant difficulty in interpreting the literature regarding dietary intake of omega−3 fatty acids (e.g. from fish) due to participant recall and systematic differences in diets. There is also controversy as to the efficacy of omega−3, with many meta-analysis papers finding heterogeneity among results which can be explained mostly by publication bias. A significant correlation between shorter treatment trials was associated with increased omega−3 efficacy for treating depressed symptoms further implicating bias in publication. Cognitive aging A 2016 Cochrane review found no convincing evidence for the use of omega‐3 PUFA supplements in treatment of Alzheimer's disease or dementia. There is preliminary evidence of effect on mild cognitive problems, but none supporting an effect in healthy people or those with dementia. A 2020 review suggested that omega−3 supplementation has no effect on global cognitive function but has a mild benefit in improving memory in non-demented adults. 
A 2022 review found promising evidence for prevention of cognitive decline in people who regularly eat long-chain omega−3 rich foods. Conversely, clinical trials with participants already diagnosed with Alzheimer's show no effect. A 2020 review concluded that long-chain omega−3 supplements do not deter cognitive decline in older adults. Brain and visual functions Brain function and vision rely on dietary intake of DHA to support a broad range of cell membrane properties, particularly in grey matter, which is rich in membranes. A major structural component of the mammalian brain, DHA is the most abundant omega−3 fatty acid in the brain. Omega−3 PUFA supplementation has no effect on macular degeneration or development of visual loss. Atopic diseases Results of studies investigating the role of LCPUFA supplementation and LCPUFA status in the prevention and therapy of atopic diseases (allergic rhinoconjunctivitis, atopic dermatitis, and allergic asthma) are controversial; therefore, it could not be stated either that the nutritional intake of n−3 fatty acids has a clear preventive or therapeutic role, or that the intake of n-6 fatty acids has a promoting role in the context of atopic diseases. Phenylketonuria People with PKU often have low intake of omega−3 fatty acids, because nutrients rich in omega−3 fatty acids are excluded from their diet due to high protein content. Asthma As of 2015, there was no evidence that taking omega−3 supplements can prevent asthma attacks in children. Diabetes A 2019 review found that omega−3 supplements have no effect on prevention and treatment of type 2 diabetes. A 2021 meta-analysis found that supplementation with omega−3 had positive effects on diabetes biomarkers, such as fasting blood glucose and insulin resistance. Sexual health A 2017 animal study examined the effects of omega−3 supplement on BPF-induced erectile dysfunction. Rats in the treatment group were found to have significantly improved erection quality.
Biology and health sciences
Lipids
Biology
22595
https://en.wikipedia.org/wiki/Ore
Ore
Ore is natural rock or sediment that contains one or more valuable minerals, typically including metals, concentrated above background levels, and that is economically viable to mine and process. The grade of ore refers to the concentration of the desired material it contains. The value of the metals or minerals a rock contains must be weighed against the cost of extraction to determine whether it is of sufficiently high grade to be worth mining and is therefore considered an ore. A complex ore is one containing more than one valuable mineral. Minerals of interest are generally oxides, sulfides, silicates, or native metals such as copper or gold. Ore bodies are formed by a variety of geological processes generally referred to as ore genesis and can be classified based on their deposit type. Ore is extracted from the earth through mining and treated or refined, often via smelting, to extract the valuable metals or minerals. Some ores, depending on their composition, may pose threats to health or surrounding ecosystems. The word ore is of Anglo-Saxon origin, meaning lump of metal. Gangue and tailings In most cases, an ore does not consist entirely of a single mineral, but is mixed with other valuable minerals and with unwanted or valueless rocks and minerals. The part of an ore that is not economically desirable and that cannot be avoided in mining is known as gangue. The valuable ore minerals are separated from the gangue minerals by froth flotation, gravity concentration, electric or magnetic methods, and other operations known collectively as mineral processing or ore dressing. Mineral processing consists first of liberation, to free the ore minerals from the gangue, and then concentration, to separate the desired mineral(s) from it. Once processed, the gangue is known as tailings, which are useless but potentially harmful materials produced in great quantity, especially from lower-grade deposits. Ore deposits An ore deposit is an economically significant accumulation of minerals within a host rock. This is distinct from a mineral resource in that it is a mineral deposit occurring in high enough concentration to be economically viable. An ore deposit is one occurrence of a particular ore type. Most ore deposits are named according to their location, or after a discoverer (e.g. the Kambalda nickel shoots are named after drillers), or after some whimsy, a historical figure, a prominent person, a city or town from which the owner came, something from mythology (such as the name of a god or goddess) or the code name of the resource company which found it (e.g. MKD-5 was the in-house name for the Mount Keith nickel sulphide deposit). Classification Ore deposits are classified according to various criteria developed via the study of economic geology, or ore genesis. The following is a general categorization of the main ore deposit types: Magmatic deposits Magmatic deposits are those that originate directly from magma. Pegmatites are very coarse-grained igneous rocks. They crystallize slowly at great depth beneath the surface, leading to their very large crystal sizes. Most are of granitic composition. They are a large source of industrial minerals such as quartz, feldspar, spodumene, petalite, and rare lithophile elements. Carbonatites are igneous rocks whose volume is made up of over 50% carbonate minerals. They are produced from mantle-derived magmas, typically at continental rift zones.
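The trade-off between grade and extraction cost described above can be sketched as a simple in-situ value calculation: a tonne of rock counts as "ore" only if the recoverable metal value it contains exceeds the cost of mining and processing that tonne. The function and figures below are illustrative assumptions, not industry reference values.

```python
def is_ore(grade_pct, metal_price_per_t, recovery, cost_per_t_rock):
    """Return True if a tonne of rock is worth mining under the given assumptions.

    grade_pct         -- metal content of the rock, in percent by mass
    metal_price_per_t -- market price of the contained metal, $ per tonne of metal
    recovery          -- fraction of the contained metal actually recovered (0-1)
    cost_per_t_rock   -- combined mining + processing cost, $ per tonne of rock
    """
    recoverable_value = (grade_pct / 100.0) * recovery * metal_price_per_t
    return recoverable_value > cost_per_t_rock

# Hypothetical porphyry-style copper block: 0.8% Cu, $9,000/t copper, 85% recovery, $25/t costs.
print(is_ore(grade_pct=0.8, metal_price_per_t=9_000, recovery=0.85, cost_per_t_rock=25))  # True
print(is_ore(grade_pct=0.2, metal_price_per_t=9_000, recovery=0.85, cost_per_t_rock=25))  # False
```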
They contain more rare earth elements than any other igneous rock, and as such are a major source of light rare earth elements. Magmatic Sulfide Deposits form from mantle melts which rise upwards and gain sulfur through interaction with the crust. This causes the sulfide minerals present to become immiscible, precipitating out when the melt crystallizes. Magmatic sulfide deposits can be subdivided into two groups by their dominant ore element: Ni-Cu, found in komatiites, anorthosite complexes, and flood basalts. This also includes the Sudbury Nickel Basin, the only known astrobleme source of such ore. Platinum Group Elements (PGE) from large mafic intrusions and tholeiitic rock. Stratiform Chromites are strongly linked to PGE magmatic sulfide deposits. These highly mafic intrusions are a source of chromite, the only chromium ore. They are so named due to their strata-like shape and formation via layered magmatic injection into the host rock. Chromium is usually located within the bottom of the intrusion. They are typically found within intrusions in continental cratons, the most famous example being the Bushveld Complex in South Africa. Podiform Chromitites are found in ultramafic oceanic rocks resulting from complex magma mixing. They are hosted in serpentine- and dunite-rich layers and are another source of chromite. Kimberlites are a primary source for diamonds. They originate from depths of 150 km in the mantle and are mostly composed of crustal xenocrysts, high amounts of magnesium, other trace elements, gases, and in some cases diamond. Metamorphic deposits These are ore deposits which form as a direct result of metamorphism. Skarns occur in numerous geologic settings worldwide. They are silicates derived from the recrystallization of carbonates like limestone through contact or regional metamorphism, or fluid-related metasomatic events. Not all are economic, but those with potential value are classified depending on the dominant element such as Ca, Fe, Mg, or Mn among many others. They are one of the most diverse and abundant mineral deposits. As such they are classified solely by their common mineralogy, mainly garnets and pyroxenes. Greisens, like skarns, are metamorphosed silicate (quartz-mica) mineral deposits. Formed from a granitic protolith due to alteration by intruding magmas, they are large ore sources of tin and tungsten in the form of wolframite, cassiterite, stannite and scheelite. Porphyry copper deposits These are the leading source of copper ore. Porphyry copper deposits form along convergent boundaries and are thought to originate from the partial melting of subducted oceanic plates and subsequent concentration of Cu, driven by oxidation. These are large, round, disseminated deposits containing on average 0.8% copper by weight. Hydrothermal Hydrothermal deposits are a large source of ore. They form as a result of the precipitation of dissolved ore constituents out of fluids. Mississippi Valley-Type (MVT) deposits precipitate from relatively cool basinal brines within carbonate strata. These are sources of lead and zinc sulphide ore. Sediment-Hosted Stratiform Copper Deposits (SSC) form when copper sulphides precipitate out of brines within sedimentary basins near the equator. These are the second most common source of copper ore after porphyry copper deposits, supplying 20% of the world's copper in addition to silver and cobalt.
Volcanogenic massive sulphide (VMS) deposits form on the seafloor from precipitation of metal-rich solutions, typically associated with hydrothermal activity. They take the general form of a large sulphide-rich mound above disseminated sulphides and veins. VMS deposits are a major source of zinc (Zn), copper (Cu), lead (Pb), silver (Ag), and gold (Au). Sedimentary exhalative sulphide deposits (SEDEX) are sulphide ores which form in the same manner as VMS deposits, from metal-rich brines, but are hosted within sedimentary rocks and are not directly related to volcanism. Orogenic gold deposits are a major source of gold, with 75% of gold production originating from orogenic gold deposits. Formation occurs during late-stage mountain building (see orogeny), where metamorphism forces gold-bearing fluids into joints and fractures in which they precipitate. These tend to be strongly correlated with quartz veins. Epithermal vein deposits form in the shallow crust from concentration of metal-bearing fluids into veins and stockworks where conditions favour precipitation. These volcanic-related deposits are a source of gold and silver ore, the primary precipitants. Sedimentary deposits Laterites form from the weathering of highly mafic rock near the equator. They can form in as little as one million years and are a source of iron (Fe), manganese (Mn), and aluminum (Al). They may also be a source of nickel and cobalt when the parent rock is enriched in these elements. Banded iron formations (BIFs) contain the highest concentration of any single metal available. They are composed of chert beds alternating between high and low iron concentrations. Their deposition occurred early in Earth's history when the atmospheric composition was significantly different from today. Iron-rich water is thought to have upwelled where it oxidized to Fe(III) in the presence of early photosynthetic plankton producing oxygen. This iron then precipitated out and was deposited on the ocean floor. The banding is thought to be a result of changing plankton populations. Sediment-hosted copper deposits form from the precipitation of copper-rich oxidized brines into sedimentary rocks. These are a source of copper primarily in the form of copper-sulfide minerals. Placer deposits are the result of weathering, transport, and subsequent concentration of a valuable mineral via water or wind. They are typically sources of gold (Au), platinum group elements (PGE), sulfide minerals, tin (Sn), tungsten (W), and rare-earth elements (REEs). A placer deposit is considered alluvial if formed via a river, colluvial if by gravity, and eluvial when close to its parent rock. Manganese nodules Polymetallic nodules, also called manganese nodules, are mineral concretions on the sea floor formed of concentric layers of iron and manganese hydroxides around a core. They are formed by a combination of diagenetic and sedimentary precipitation at an estimated rate of about a centimeter over several million years. Polymetallic nodules average between 3 and 10 cm (1 and 4 in) in diameter and are characterized by enrichment in iron, manganese, heavy metals, and rare-earth elements when compared to the Earth's crust and surrounding sediment. The proposed mining of these nodules via remotely operated ocean floor trawling robots has raised a number of ecological concerns. Extraction The extraction of ore deposits generally follows these steps.
Progression from stages 1–3 will see a continuous disqualification of potential ore bodies as more information is obtained on their viability: Prospecting is carried out to find where an ore is located. The prospecting stage generally involves mapping, geophysical survey techniques (aerial and/or ground-based surveys), geochemical sampling, and preliminary drilling. After a deposit is discovered, exploration is conducted to define its extent and value via further mapping and sampling techniques such as targeted diamond drilling to intersect the potential ore body. This exploration stage determines ore grade, tonnage, and whether the deposit is a viable economic resource. A feasibility study then considers the theoretical implications of the potential mining operation in order to determine if it should move ahead with development. This includes evaluating the economically recoverable portion of the deposit, marketability and payability of the ore concentrates, engineering, milling and infrastructure costs, finance and equity requirements, potential environmental impacts, political implications, and a cradle-to-grave analysis from the initial excavation all the way through to reclamation. Multiple experts from differing fields must then approve the study before the project can move on to the next stage. Depending on the size of the project, a pre-feasibility study is sometimes first performed to assess preliminary potential and whether a much costlier full feasibility study is even warranted. Development begins once an ore body has been confirmed economically viable and involves steps to prepare for its extraction, such as the building of a mine plant and equipment. Production can then begin; this is the operation of the mine in an active sense. The time a mine is active depends on its remaining reserves and profitability. The extraction method used is entirely dependent on the deposit type, geometry, and surrounding geology. Methods can be generally categorized into surface mining, such as open-pit or strip mining, and underground mining, such as block caving, cut and fill, and stoping. Reclamation, once the mine is no longer operational, makes the land where a mine had been suitable for future use. With rates of ore discovery in a steady decline since the mid-20th century, it is thought that most surface-level, easily accessible sources have been exhausted. This means progressively lower-grade deposits must be turned to, and new methods of extraction must be developed. Hazards Some ores contain heavy metals, toxins, radioactive isotopes and other potentially harmful compounds which may pose a risk to the environment or health. The exact effects an ore and its tailings have depend on the minerals present. Tailings of particular concern are those of older mines, as containment and remediation methods in the past were next to non-existent, leading to high levels of leaching into the surrounding environment. Mercury and arsenic are two ore-related elements of particular concern. Additional elements found in ore which may have adverse health effects in organisms include iron, lead, uranium, zinc, silicon, titanium, sulfur, nitrogen, platinum, and chromium. Exposure to these elements may result in respiratory and cardiovascular problems and neurological issues. These are of particular danger to aquatic life if dissolved in water. Ores such as those of sulphide minerals may severely increase the acidity of their immediate surroundings and of water, with numerous, long-lasting impacts on ecosystems.
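The feasibility stage described above ultimately asks whether discounted future revenues from the recoverable portion of the deposit exceed development and operating costs. The sketch below is a deliberately simplified net-present-value screen under assumed figures; real studies model far more (payability, reclamation, taxes, price cycles), and every number here is hypothetical.

```python
def simple_npv_screen(tonnes_per_year, years, grade_pct, recovery,
                      metal_price_per_t, opex_per_t, capex, discount_rate=0.08):
    """Crude net-present-value screen for a mining project; all inputs are assumptions."""
    annual_metal = tonnes_per_year * (grade_pct / 100.0) * recovery   # tonnes of metal per year
    annual_cash_flow = annual_metal * metal_price_per_t - tonnes_per_year * opex_per_t
    npv = -capex + sum(annual_cash_flow / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    return npv

# Hypothetical copper project: 5 Mt of rock per year over 12 years at 0.8% Cu.
npv = simple_npv_screen(tonnes_per_year=5_000_000, years=12, grade_pct=0.8,
                        recovery=0.85, metal_price_per_t=9_000,
                        opex_per_t=25, capex=1_200_000_000)
print(f"NPV = ${npv / 1e9:.2f} billion")  # positive -> worth a full feasibility study
```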
When water becomes contaminated it may transport these compounds far from the tailings site, greatly increasing the affected range. Uranium ores and those containing other radioactive elements may pose a significant threat if leaching occurs and isotope concentrations increase above background levels. Radiation can have severe, long-lasting environmental impacts and cause irreversible damage to living organisms. History Metallurgy began with the direct working of native metals such as gold, lead and copper. Placer deposits, for example, would have been the first source of native gold. The first exploited ores were copper oxides such as malachite and azurite, over 7000 years ago at Çatalhöyük. These were the easiest to work, with relatively limited mining and basic requirements for smelting. It is believed they were once much more abundant on the surface than today. After this, copper sulphides would have been turned to as oxide resources were depleted and the Bronze Age progressed. Lead production from galena smelting may have been occurring at this time as well. The smelting of arsenic-copper sulphides would have produced the first bronze alloys. Most bronze production, however, required tin, and thus the exploitation of cassiterite, the main tin source, began. Some 3000 years ago, the smelting of iron ores began in Mesopotamia. Iron oxide is quite abundant on the surface and forms from a variety of processes. Until the 18th century, gold, copper, lead, iron, silver, tin, arsenic and mercury were the only metals mined and used. In recent decades, rare-earth elements have been increasingly exploited for various high-tech applications. This has led to an ever-growing search for REE ore and novel ways of extracting these elements. Trade Ores (metals) are traded internationally and comprise a sizeable portion of international trade in raw materials, both in value and volume. This is because the worldwide distribution of ores is unequal and dislocated from locations of peak demand and from smelting infrastructure. Most base metals (copper, lead, zinc, nickel) are traded internationally on the London Metal Exchange, with smaller stockpiles and metals exchanges monitored by the COMEX and NYMEX exchanges in the United States and the Shanghai Futures Exchange in China. The global chromium market is currently dominated by the United States and China. Iron ore is traded between customer and producer, though various benchmark prices are set quarterly between the major mining conglomerates and the major consumers, and this sets the stage for smaller participants. Other, lesser commodities do not have international clearing houses and benchmark prices, with most prices negotiated between suppliers and customers one-on-one. This generally makes determining the price of ores of this nature opaque and difficult. Such metals include lithium, niobium-tantalum, bismuth, antimony and rare earths. Most of these commodities are also dominated by one or two major suppliers with >60% of the world's reserves. China currently leads world production of rare-earth elements. The World Bank reports that China was the top importer of ores and metals in 2005, followed by the US and Japan. Important ore minerals For detailed petrographic descriptions of ore minerals see Tables for the Determination of Common Opaque Minerals by Spry and Gedlinske (1987). Below are the major economic ore minerals and their deposits, grouped by primary elements.
Physical sciences
Petrology
null
22649
https://en.wikipedia.org/wiki/Observation
Observation
Observation in the natural sciences is an act or instance of noticing or perceiving and the acquisition of information from a primary source. In living beings, observation employs the senses. In science, observation can also involve the perception and recording of data via the use of scientific instruments. The term may also refer to any data collected during the scientific activity. Observations can be qualitative, that is, the absence or presence of a property is noted and the observed phenomenon described, or quantitative if a numerical value is attached to the observed phenomenon by counting or measuring. Science The scientific method requires observations of natural phenomena to formulate and test hypotheses. It consists of the following steps: Ask a question about a phenomenon Make observations of the phenomenon Formulate a hypothesis that tentatively answers the question Predict logical, observable consequences of the hypothesis that have not yet been investigated Test the hypothesis' predictions by an experiment, observational study, field study, or simulation Draw a conclusion from data gathered in the experiment, or revise the hypothesis or form a new one and repeat the process Write a descriptive method of observation and the results or conclusions reached Have peers with experience researching the same phenomenon evaluate the results Observations play a role in the second and fifth steps of the scientific method. However, the need for reproducibility requires that observations by different observers can be comparable. Human sense impressions are subjective and qualitative, making them difficult to record or compare. The use of measurement was developed to allow recording and comparison of observations made at different times and places, by different people. The measurement consists of using observation to compare the phenomenon being observed to a standard unit. The standard unit can be an artifact, process, or definition which can be duplicated or shared by all observers. In measurement, the number of standard units which is equal to the observation is counted. Measurement reduces an observation to a number that can be recorded, and two observations which result in the same number are equal within the resolution of the process. Human senses are limited and subject to errors in perception, such as optical illusions. Scientific instruments were developed to aid human abilities of observation, such as weighing scales, clocks, telescopes, microscopes, thermometers, cameras, and tape recorders, and also translate into perceptible form events that are unobservable by the senses, such as indicator dyes, voltmeters, spectrometers, infrared cameras, oscilloscopes, interferometers, Geiger counters, and radio receivers. One problem encountered throughout scientific fields is that the observation may affect the process being observed, resulting in a different outcome than if the process was unobserved. This is called the observer effect. For example, it is not normally possible to check the air pressure in an automobile tire without letting out some of the air, thereby changing the pressure. However, in most fields of science, it is possible to reduce the effects of observation to insignificance by using better instruments. Considered as a physical process itself, all forms of observation (human or instrumental) involve amplification and are thus thermodynamically irreversible processes, increasing entropy. 
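The idea that measurement reduces an observation to a count of standard units, equal "within the resolution of the process", can be shown in a tiny sketch. The function and values below are hypothetical, not from the source.

```python
def measure(quantity, resolution):
    """Reduce an observation to a whole number of resolution steps of the standard unit.

    Two observations that round to the same count are indistinguishable at this
    resolution -- the sense in which they are 'equal within the resolution of
    the process'.
    """
    steps = round(quantity / resolution)   # count of standard (sub)units
    return steps * resolution              # recorded value

# Two observers measure the same rod, in metres, with a rule graduated in millimetres.
print(measure(1.2843, resolution=0.001))  # 1.284
print(measure(1.2838, resolution=0.001))  # 1.284 -> equal within resolution
```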
Paradoxes In some specific fields of science, the results of observation differ depending on factors that are not important in everyday observation. These are usually illustrated with apparent "paradoxes" in which an event appears different when observed from two different points of view, seeming to violate "common sense". Relativity: In relativistic physics, which deals with velocities close to the speed of light, different observers may observe different values for the length, time rates, mass, and many other properties of an object, depending on the observer's velocity relative to the object. For example, in the twin paradox one twin goes on a trip near the speed of light and comes home younger than the twin who stayed at home. This is not a paradox: time passes at a slower rate when measured from a frame moving with respect to the object. In relativistic physics, an observation must always be qualified by specifying the state of motion of the observer, its reference frame. Quantum mechanics: In quantum mechanics, which deals with the behavior of very small objects, it is not possible to observe a system without changing the system, and the "observer" must be considered part of the system being observed. In isolation, quantum objects are represented by a wave function which often exists in a superposition or mixture of different states. However, when an observation is made to determine the actual location or state of the object, it always finds the object in a single state, not a "mixture". The interaction of the observation process appears to "collapse" the wave function into a single state. So any interaction between an isolated wave function and the external world that results in this wave function collapse is called an observation or measurement, whether or not it is part of a deliberate observation process. Biases The human senses do not function like a video camcorder, impartially recording all observations. Human perception occurs by a complex, unconscious process of abstraction, in which certain details of the incoming sense data are noticed and remembered, and the rest is forgotten. What is kept and what is thrown away depends on an internal model or representation of the world, called by psychologists a schema, that is built up over our entire lives. The data is fitted into this schema. Later, when events are remembered, memory gaps may even be filled by "plausible" data the mind makes up to fit the model; this is called reconstructive memory. How much attention the various perceived data are given depends on an internal value system, which judges how important it is to the individual. Thus two people can view the same event and come away with very different perceptions of it, even disagreeing about simple facts. This is why eyewitness testimony is notoriously unreliable. Correct scientific technique emphasizes careful recording of observations, separating experimental observations from the conclusions drawn from them, and techniques such as blind or double-blind experiments, to minimize observational bias. Several of the more important ways observations can be affected by human psychology are given below. Streetlight effect Confirmation bias Human observations are biased toward confirming the observer's conscious and unconscious expectations and view of the world; we "see what we expect to see". In psychology, this is called confirmation bias.
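The twin-paradox asymmetry mentioned above follows from the Lorentz time-dilation factor γ = 1/√(1 − v²/c²): the travelling twin's elapsed proper time is the stay-at-home elapsed time divided by γ. A minimal numeric illustration, with an arbitrarily chosen speed and trip length:

```python
import math

def traveller_elapsed_time(stay_home_years, speed_fraction_of_c):
    """Proper time experienced by the travelling twin, per special relativity."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)  # Lorentz factor
    return stay_home_years / gamma

# A 10-year round trip (Earth time) at 80% of the speed of light:
print(f"{traveller_elapsed_time(10, 0.8):.1f} years pass for the traveller")  # 6.0 years
```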
Since the object of scientific research is the discovery of new phenomena, this bias can cause, and has caused, new discoveries to be overlooked; one example is the discovery of X-rays. On the other hand, it can also result in erroneous scientific support for widely held cultural myths, as in the scientific racism that supported ideas of racial superiority in the early 20th century. Processing bias Modern scientific instruments can extensively process "observations" before they are presented to the human senses, and particularly with computerized instruments, there is sometimes a question as to where in the data processing chain "observing" ends and "drawing conclusions" begins. This has recently become an issue with digitally enhanced images published as experimental data in papers in scientific journals. The images are enhanced to bring out features that the researcher wants to emphasize, but this also has the effect of supporting the researcher's conclusions. This is a form of bias that is difficult to quantify. Some scientific journals have begun to set detailed standards for what types of image processing are allowed in research results. Computerized instruments often keep a copy of the "raw data" from sensors before processing, which is the ultimate defense against processing bias, and similarly, scientific standards require preservation of the original unenhanced "raw" versions of images used as research data.
Physical sciences
Basics
null
22654
https://en.wikipedia.org/wiki/Ohio-class%20submarine
Ohio-class submarine
The Ohio class of nuclear-powered submarines includes the United States Navy's 14 ballistic missile submarines (SSBNs) and its four cruise missile submarines (SSGNs). Each displacing 18,750 tons submerged, the Ohio-class boats are the largest submarines ever built for the U.S. Navy. They are also the third-largest submarines ever built, behind the Russian Navy's Soviet era 48,000-ton , the last of which was retired in 2023, and 24,000-ton . Capable of carrying 24 Trident II missiles apiece, the Ohio class are equipped with just as many missiles as, if not more than, either the Borei class (16) or the deactivated Typhoon class (20). Like their predecessors the and es, the Ohio-class SSBNs are part of the United States' nuclear-deterrent triad, along with U.S. Air Force strategic bombers and intercontinental ballistic missiles. The 14 SSBNs together carry about half of U.S. active strategic thermonuclear warheads. Although the Trident missiles have no preset targets when the submarines go on patrol, they can be given targets quickly, from the United States Strategic Command based in Nebraska, using secure and constant radio communications links, including very low frequency systems. All the Ohio-class submarines, except for , are named for U.S. states, which U.S. Navy tradition had previously reserved for battleships and later cruisers. The Ohio class is to be gradually replaced by the beginning in 2031. Description The Ohio-class submarine was designed for extended strategic deterrent patrols. Each submarine is assigned two complete crews, called the Blue crew and the Gold crew, each typically serving 70-to-90-day deterrent patrols. To decrease the time in port for crew turnover and replenishment, three large logistics hatches have been installed to provide large-diameter resupply and repair access. These hatches allow rapid transfer of supply pallets, equipment replacement modules, and machinery components, speeding up replenishment and maintenance of the submarines. Moreover, the "stealth" ability of the submarines was significantly improved over all previous ballistic-missile subs. Ohio was virtually undetectable in her sea trials in 1982, giving the U.S. Navy extremely advanced flexibility. The class's design allows the boat to operate for about 15 years between major overhauls. These submarines are reported to be as quiet at their cruising speed of or more as the previous s at , although exact information remains classified. Fire control for their Mark 48 torpedoes is carried out by Mark 118 Mod 2 system, while the Missile Fire Control system is a Mark 98. The Ohio-class submarines were constructed from sections of hull, with each four-deck section being in diameter. The sections were produced at the General Dynamics Electric Boat facility, Quonset Point, Rhode Island, and then assembled at its shipyard at Groton, Connecticut. The US Navy has a total of 18 Ohio-class submarines which consist of 14 ballistic missile submarines (SSBNs), and four cruise missile submarines (SSGNs). The SSBN submarines provide the sea-based leg of the U.S. nuclear triad. Each SSBN submarine is armed with up to 20 Trident II submarine-launched ballistic missiles (SLBM). Each SSGN is capable of carrying 154 Tomahawk cruise missiles, plus a complement of Harpoon missiles to be fired through their torpedo tubes. History The Ohio class was designed in the 1970s to carry the concurrently designed Trident submarine-launched ballistic missile. 
The first eight Ohio-class submarines were armed at first with 24 Trident I C4 SLBMs. Beginning with the ninth Trident submarine, , the remaining boats were equipped with the larger, three-stage Trident II D5 missile. The Trident I missile carries eight multiple independently targetable reentry vehicles, while the Trident II missile carries 12, in total delivering more destructive power than the Trident I missile and with greater accuracy. Starting with in 2000, the Navy began converting its remaining ballistic missile submarines armed with C4 missiles to carry D5 missiles. This task was completed in mid-2008. The first eight submarines had their home ports at Bangor, Washington, to replace the submarines carrying Polaris A3 missiles that were then being decommissioned. The remaining 10 submarines originally had their home ports at Kings Bay, Georgia, replacing the Poseidon and Trident Backfit submarines of the Atlantic Fleet. SSBN/SSGN conversions In 1994, the Nuclear Posture Review study determined that, of the 18 Ohio SSBNs the U.S. Navy would be operating in total, 14 would be sufficient for the strategic needs of the U.S. The decision was made to convert four Ohio-class boats into SSGNs capable of conducting conventional land attack and special operations. As a result, the four oldest boats of the class—Ohio, Michigan, Florida, and Georgia—progressively entered the conversion process in late 2002 and were returned to active service by 2008. The boats could thereafter carry 154 Tomahawk cruise missiles and 66 special operations personnel, among other capabilities and upgrades. The cost to refit the four boats was around US$1 billion (2008 dollars) per vessel. During the conversion of these four submarines to SSGNs (see below), five of the remaining submarines, , , , , and , were transferred from Kings Bay to Bangor. Further transfers occur as the strategic weapons goals of the United States change. In 2011, Ohio-class submarines carried out 28 deterrent patrols. Each patrol lasts around 70 days. Four boats are on station ("hard alert") in designated patrol areas at any given time. From January to June 2014, Pennsylvania carried out a 140-day-long patrol, the longest to date. The conversion modified 22 of the 24 diameter Trident missile tubes to contain large vertical launch systems, one configuration of which may be a cluster of seven Tomahawk cruise missiles. In this configuration, the number of cruise missiles carried could be a maximum of 154, the equivalent of what is typically deployed in a surface battle group. Other payload possibilities include new generations of supersonic and hypersonic cruise missiles, and Submarine Launched Intermediate Range Ballistic Missiles, unmanned aerial vehicles, the ADM-160 MALD, sensors for antisubmarine warfare or intelligence, surveillance, and reconnaissance missions, counter mine warfare payloads such as the AN/BLQ-11 Long Term Mine Reconnaissance System, and the broaching universal buoyant launcher and stealthy affordable capsule system specialized payload canisters. The missile tubes also have room for stowage canisters that can extend the forward deployment time for special forces. The other two Trident tubes are converted to swimmer lockout chambers. 
For special operations, the Dry Combat Submersible (which replaced the Advanced SEAL Delivery System), as well as the dry deck shelter, can be mounted on the lockout chamber and the boat will be able to host up to 66 special-operations sailors or Marines, such as Navy SEALs, or USMC MARSOC teams. Improved communications equipment installed during the upgrade allows the SSGNs to serve as a forward-deployed, clandestine Small Combatant Joint Command Center. On 26 September 2002, the Navy awarded General Dynamics Electric Boat a US$442.9 million contract to begin the first phase of the SSGN submarine conversion program. Those funds covered only the initial phase of conversion for the first two boats on the schedule. Advance procurement was funded at $355 million in fiscal year 2002, $825 million in the FY 2003 budget and, through the five-year defense budget plan, at $936 million in FY 2004, $505 million in FY 2005, and $170 million in FY 2006. Thus, the total cost to refit the four boats is just under $700 million per vessel. In November 2002, Ohio entered a dry-dock, beginning her 36-month refueling and missile-conversion overhaul. Electric Boat announced on 9 January 2006 that the conversion had been completed. The converted Ohio rejoined the fleet in February 2006, followed by Florida in April 2006. The converted Michigan was delivered in November 2006. The converted Ohio went to sea for the first time in October 2007. Georgia returned to the fleet in March 2008 at Kings Bay. These four SSGNs are expected to remain in service until about 2023–2026. At that point, their capabilities will be replaced with Virginia Payload Module-equipped . Missile tube reduction As part of the New START treaty, four tubes on each SSBN were deactivated in 2017, reducing the number of missiles to 20 per boat. Detailed cross-section Boats in class Note: Boats based at Naval Base Kitsap, Washington are operated by the U.S. Pacific Fleet, while boats based at Naval Submarine Base Kings Bay, Georgia are operated by U.S. Fleet Forces Command, (formerly the U.S. Atlantic Fleet). Replacement The U.S. Department of Defense anticipated a continued need for a sea-based strategic nuclear force. The first of the current Ohio-class SSBNs was expected to be retired by 2029, so the replacement submarine would need to be seaworthy by that time. A replacement was expected to cost over $4 billion per unit compared to Ohios $2 billion. The U.S. Navy explored two options. The first option was a variant of the nuclear-powered attack submarines. The second option was a dedicated SSBN, either with a new hull or based on an overhaul of the current Ohio class. With the cooperation of both Electric Boat and Newport News Shipbuilding, in 2007, the U.S. Navy began a cost-control study. Then in December 2008, the U.S. Navy awarded Electric Boat a contract for the missile compartment design of the Ohio-class replacement, worth up to $592 million. Newport News is expected to receive close to 4% of that project. In April 2009, U.S. Defense Secretary Robert M. Gates stated that the U.S. Navy was expected to begin such a program in 2010. The new vessel was scheduled to enter the design phase by 2014. If a new hull design was to be used, the program needed to be initiated by 2016 to meet the 2029 deadline. The Columbia class was officially designated on 14 December 2016, by Secretary of the Navy Ray Mabus, and the lead submarine will be . 
The Navy wants to procure the first Columbia-class boat in FY2021, though it is not expected to enter service until 2031. In 2020, Navy officials first publicly discussed the idea of extending the lives of select Ohio-class boats at the Naval Submarine League's 2020 conference. During the 2022 conference, Rear Admiral Scott Pappano, the program executive officer for strategic submarines, and Rear Admiral Douglas G. Perry, the director of undersea warfare on the Chief of Naval Operations' staff, discussed the Columbia-class program, and also touched on the possibility of finding Ohio-class boats that had sufficient remaining nuclear fuel and were in good enough material state to be given a further extension to their lives. In popular culture As ballistic-missile submarines, the Ohio class has occasionally been portrayed in fiction books and films. Tom Clancy wrote Ohio-class submarines into several novels, such as in The Sum of All Fears (1991). The fictional USS Montana is featured in the 1989 film The Abyss. is the setting for the 1995 submarine film Crimson Tide. The fictional ballistic missile submarine USS Colorado (SSBN-753) is the primary setting for the ABC television series Last Resort. is featured in Season 1, Episode 13 of the American television series The Brave.
Technology
Naval warfare
null
22669
https://en.wikipedia.org/wiki/Open%20cluster
Open cluster
An open cluster is a type of star cluster made of tens to a few thousand stars that were formed from the same giant molecular cloud and have roughly the same age. More than 1,100 open clusters have been discovered within the Milky Way galaxy, and many more are thought to exist. Each one is loosely bound by mutual gravitational attraction and becomes disrupted by close encounters with other clusters and clouds of gas as they orbit the Galactic Center. This can result in a loss of cluster members through internal close encounters and a dispersion into the main body of the galaxy. Open clusters generally survive for a few hundred million years, with the most massive ones surviving for a few billion years. In contrast, the more massive globular clusters of stars exert a stronger gravitational attraction on their members, and can survive for longer. Open clusters have been found only in spiral and irregular galaxies, in which active star formation is occurring. Young open clusters may be contained within the molecular cloud from which they formed, illuminating it to create an H II region. Over time, radiation pressure from the cluster will disperse the molecular cloud. Typically, about 10% of the mass of a gas cloud will coalesce into stars before radiation pressure drives the rest of the gas away. Open clusters are key objects in the study of stellar evolution. Because the cluster members are of similar age and chemical composition, their properties (such as distance, age, metallicity, extinction, and velocity) are more easily determined than they are for isolated stars. A number of open clusters, such as the Pleiades, the Hyades and the Alpha Persei Cluster, are visible with the naked eye. Some others, such as the Double Cluster, are barely perceptible without instruments, while many more can be seen using binoculars or telescopes. The Wild Duck Cluster, M11, is an example. Historical observations The prominent open cluster the Pleiades, in the constellation Taurus, has been recognized as a group of stars since antiquity, while the Hyades (which also form part of Taurus) is one of the oldest open clusters. Other open clusters were noted by early astronomers as unresolved fuzzy patches of light. In his Almagest, the Roman astronomer Ptolemy mentions the Praesepe cluster, the Double Cluster in Perseus, the Coma Star Cluster and the Ptolemy Cluster, while the Persian astronomer Al-Sufi wrote of the Omicron Velorum cluster. However, it would require the invention of the telescope to resolve these "nebulae" into their constituent stars. Indeed, in 1603 Johann Bayer gave three of these clusters designations as if they were single stars. The first person to use a telescope to observe the night sky and record his observations was the Italian scientist Galileo Galilei in 1609. When he turned the telescope toward some of the nebulous patches recorded by Ptolemy, he found they were not a single star, but groupings of many stars. For Praesepe, he found more than 40 stars. Where previously observers had noted only 6–7 stars in the Pleiades, he found almost 50. In his 1610 treatise Sidereus Nuncius, Galileo Galilei wrote, "the galaxy is nothing else but a mass of innumerable stars planted together in clusters." Influenced by Galileo's work, the Sicilian astronomer Giovanni Hodierna became possibly the first astronomer to use a telescope to find previously undiscovered open clusters. In 1654, he identified the objects now designated Messier 41, Messier 47, NGC 2362 and NGC 2451. 
It was realized as early as 1767 that the stars in a cluster were physically related, when the English naturalist the Reverend John Michell calculated that the probability of even just one group of stars like the Pleiades being the result of a chance alignment as seen from Earth was just 1 in 496,000. Between 1774 and 1781, French astronomer Charles Messier published a catalogue of celestial objects that had a nebulous appearance similar to comets. This catalogue included 26 open clusters. In the 1790s, English astronomer William Herschel began an extensive study of nebulous celestial objects. He discovered that many of these features could be resolved into groupings of individual stars. Herschel conceived the idea that stars were initially scattered across space, but later became clustered together as star systems because of gravitational attraction. He divided the nebulae into eight classes, with classes VI through VIII being used to classify clusters of stars. The number of clusters known continued to increase under the efforts of astronomers. Hundreds of open clusters were listed in the New General Catalogue, first published in 1888 by the Danish–Irish astronomer J. L. E. Dreyer, and the two supplemental Index Catalogues, published in 1896 and 1905. Telescopic observations revealed two distinct types of clusters, one of which contained thousands of stars in a regular spherical distribution and was found all across the sky but preferentially towards the center of the Milky Way. The other type consisted of a generally sparser population of stars in a more irregular shape. These were generally found in or near the galactic plane of the Milky Way. Astronomers dubbed the former globular clusters, and the latter open clusters. Because of their location, open clusters are occasionally referred to as galactic clusters, a term that was introduced in 1925 by the Swiss-American astronomer Robert Julius Trumpler. Micrometer measurements of the positions of stars in clusters were made as early as 1877 by the German astronomer E. Schönfeld and further pursued by the American astronomer E. E. Barnard prior to his death in 1923. No indication of stellar motion was detected by these efforts. However, in 1918 the Dutch–American astronomer Adriaan van Maanen was able to measure the proper motion of stars in part of the Pleiades cluster by comparing photographic plates taken at different times. As astrometry became more accurate, cluster stars were found to share a common proper motion through space. By comparing the photographic plates of the Pleiades cluster taken in 1918 with images taken in 1943, van Maanen was able to identify those stars that had a proper motion similar to the mean motion of the cluster, and were therefore more likely to be members. Spectroscopic measurements revealed common radial velocities, thus showing that the clusters consist of stars bound together as a group. The first color–magnitude diagrams of open clusters were published by Ejnar Hertzsprung in 1911, giving the plot for the Pleiades and Hyades star clusters. He continued this work on open clusters for the next twenty years. From spectroscopic data, he was able to determine the upper limit of internal motions for open clusters, and could estimate that the total mass of these objects did not exceed several hundred times the mass of the Sun. 
He demonstrated a relationship between the star colors and their magnitudes, and in 1929 noticed that the Hyades and Praesepe clusters had different stellar populations than the Pleiades. This would subsequently be interpreted as a difference in ages of the three clusters. Formation The formation of an open cluster begins with the collapse of part of a giant molecular cloud, a cold dense cloud of gas and dust containing up to many thousands of times the mass of the Sun. These clouds have densities that vary from 10² to 10⁶ molecules of neutral hydrogen per cm³, with star formation occurring in regions with densities above 10⁴ molecules per cm³. Typically, only 1–10% of the cloud by volume is above the latter density. Prior to collapse, these clouds maintain their mechanical equilibrium through magnetic fields, turbulence and rotation. Many factors may disrupt the equilibrium of a giant molecular cloud, triggering a collapse and initiating the burst of star formation that can result in an open cluster. These include shock waves from a nearby supernova, collisions with other clouds and gravitational interactions. Even without external triggers, regions of the cloud can reach conditions where they become unstable against collapse. The collapsing cloud region will undergo hierarchical fragmentation into ever smaller clumps, including a particularly dense form known as infrared dark clouds, eventually leading to the formation of up to several thousand stars. This star formation begins enshrouded in the collapsing cloud, blocking the protostars from sight but allowing infrared observation. In the Milky Way galaxy, the formation rate of open clusters is estimated to be one every few thousand years. The hottest and most massive of the newly formed stars (known as OB stars) will emit intense ultraviolet radiation, which steadily ionizes the surrounding gas of the giant molecular cloud, forming an H II region. Stellar winds and radiation pressure from the massive stars begin to drive away the hot ionized gas at a velocity matching the speed of sound in the gas. After a few million years the cluster will experience its first core-collapse supernovae, which will also expel gas from the vicinity. In most cases these processes will strip the cluster of gas within ten million years, and no further star formation will take place. Still, about half of the resulting protostellar objects will be left surrounded by circumstellar disks, many of which form accretion disks. As only 30 to 40 percent of the gas in the cloud core forms stars, the process of residual gas expulsion is highly damaging to the star formation process. All clusters thus suffer significant infant weight loss, while a large fraction undergo infant mortality. At this point, the formation of an open cluster will depend on whether the newly formed stars are gravitationally bound to each other; otherwise an unbound stellar association will result. Even when a cluster such as the Pleiades does form, it may hold on to only a third of the original stars, with the remainder becoming unbound once the gas is expelled. The young stars so released from their natal cluster become part of the Galactic field population. Because most if not all stars form in clusters, star clusters are to be viewed as the fundamental building blocks of galaxies. The violent gas-expulsion events that shape and destroy many star clusters at birth leave their imprint in the morphological and kinematical structures of galaxies.
Most open clusters form with at least 100 stars and a mass of 50 or more solar masses. The largest clusters can have over 10⁴ solar masses, with the massive cluster Westerlund 1 being estimated at 5 × 10⁴ solar masses and R136 at almost 5 × 10⁵, typical of globular clusters. While open clusters and globular clusters form two fairly distinct groups, there may not be a great deal of intrinsic difference between a very sparse globular cluster such as Palomar 12 and a very rich open cluster. Some astronomers believe the two types of star clusters form via the same basic mechanism, with the difference being that the conditions that allowed the formation of the very rich globular clusters containing hundreds of thousands of stars no longer prevail in the Milky Way. It is common for two or more separate open clusters to form out of the same molecular cloud. In the Large Magellanic Cloud, both Hodge 301 and R136 have formed from the gases of the Tarantula Nebula, while in our own galaxy, tracing back the motion through space of the Hyades and Praesepe, two prominent nearby open clusters, suggests that they formed in the same cloud about 600 million years ago. Sometimes, two clusters born at the same time will form a binary cluster. The best-known example in the Milky Way is the Double Cluster of NGC 869 and NGC 884 (also known as h and χ Persei), but at least 10 more double clusters are known to exist. New research indicates the Cepheid-hosting M25 may constitute a ternary star cluster together with NGC 6716 and Collinder 394. Many more binary clusters are known in the Small and Large Magellanic Clouds—they are easier to detect in external systems than in our own galaxy because projection effects can cause unrelated clusters within the Milky Way to appear close to each other. Morphology and classification Open clusters range from very sparse clusters with only a few members to large agglomerations containing thousands of stars. They usually consist of quite a distinct dense core, surrounded by a more diffuse 'corona' of cluster members. The core is typically about 3–4 light years across, with the corona extending to about 20 light years from the cluster center. Typical star densities in the center of a cluster are about 1.5 stars per cubic light year; the stellar density near the Sun is about 0.003 stars per cubic light year. Open clusters are often classified according to a scheme developed by Robert Trumpler in 1930. The Trumpler scheme gives a cluster a three-part designation, with a Roman numeral from I to IV indicating the cluster's concentration and detachment from the surrounding star field (from strongly to weakly concentrated), an Arabic numeral from 1 to 3 for the range in brightness of members (from small to large range), and p, m or r to indicate whether the cluster is poor, medium or rich in stars. An 'n' is appended if the cluster lies within nebulosity. Under the Trumpler scheme, the Pleiades are classified as I3rn, and the nearby Hyades are classified as II3m. Numbers and distribution There are over 1,100 known open clusters in our galaxy, but the true total may be up to ten times higher than that. In spiral galaxies, open clusters are largely found in the spiral arms where gas densities are highest and so most star formation occurs, and clusters usually disperse before they have had time to travel beyond their spiral arm. Open clusters are strongly concentrated close to the galactic plane, with a scale height in our galaxy of about 180 light years, compared with a galactic radius of approximately 50,000 light years.
In irregular galaxies, open clusters may be found throughout the galaxy, although their concentration is highest where the gas density is highest. Open clusters are not seen in elliptical galaxies: Star formation ceased many millions of years ago in ellipticals, and so the open clusters which were originally present have long since dispersed. In the Milky Way Galaxy, the distribution of clusters depends on age, with older clusters being preferentially found at greater distances from the Galactic Center, generally at substantial distances above or below the galactic plane. Tidal forces are stronger nearer the center of the galaxy, increasing the rate of disruption of clusters, and also the giant molecular clouds which cause the disruption of clusters are concentrated towards the inner regions of the galaxy, so clusters in the inner regions of the galaxy tend to get dispersed at a younger age than their counterparts in the outer regions. Stellar composition Because open clusters tend to be dispersed before most of their stars reach the end of their lives, the light from them tends to be dominated by the young, hot blue stars. These stars are the most massive, and have the shortest lives, a few tens of millions of years. The older open clusters tend to contain more yellow stars. The frequency of binary star systems has been observed to be higher within open clusters than outside open clusters. This is seen as evidence that single stars get ejected from open clusters due to dynamical interactions. Some open clusters contain hot blue stars which seem to be much younger than the rest of the cluster. These blue stragglers are also observed in globular clusters, and in the very dense cores of globulars they are believed to arise when stars collide, forming a much hotter, more massive star. However, the stellar density in open clusters is much lower than that in globular clusters, and stellar collisions cannot explain the numbers of blue stragglers observed. Instead, it is thought that most of them probably originate when dynamical interactions with other stars cause a binary system to coalesce into one star. Once they have exhausted their supply of hydrogen through nuclear fusion, medium- to low-mass stars shed their outer layers to form a planetary nebula and evolve into white dwarfs. While most clusters become dispersed before a large proportion of their members have reached the white dwarf stage, the number of white dwarfs in open clusters is still generally much lower than would be expected, given the age of the cluster and the expected initial mass distribution of the stars. One possible explanation for the lack of white dwarfs is that when a red giant expels its outer layers to become a planetary nebula, a slight asymmetry in the loss of material could give the star a 'kick' of a few kilometres per second, enough to eject it from the cluster. Because of their high density, close encounters between stars in an open cluster are common. For a typical cluster with 1,000 stars with a 0.5 parsec half-mass radius, on average a star will have an encounter with another member every 10 million years. The rate is even higher in denser clusters. These encounters can have a significant impact on the extended circumstellar disks of material that surround many young stars. Tidal perturbations of large disks may result in the formation of massive planets and brown dwarfs, producing companions at distances of 100 AU or more from the host star. 
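The encounter interval quoted above can be checked with a simple mean-free-path estimate, t ≈ 1/(nσv), where n is the stellar number density, σ = πb² is the cross-section for passing within some impact parameter b, and v is the typical relative velocity. The sketch below uses assumed, order-of-magnitude inputs (about 1,000 stars per cubic parsec, a 1 km/s velocity dispersion, and encounters counted as approaches within roughly 1,000 AU); none of these figures are taken from the text, and gravitational focusing is ignored.

#include <iostream>

int main() {
    // Illustrative, assumed values for a modest open cluster.
    const double pi = 3.14159265358979;
    const double n_stars_per_pc3 = 1000.0;  // stellar number density
    const double v_kms = 1.0;               // typical relative velocity, km/s
    const double b_au = 1000.0;             // "encounter" = approach within ~1000 AU

    const double au_per_pc = 206265.0;        // astronomical units in one parsec
    const double pc_per_myr_per_kms = 1.023;  // 1 km/s is about 1.02 pc per million years

    const double b_pc = b_au / au_per_pc;         // impact parameter in parsecs
    const double sigma = pi * b_pc * b_pc;        // geometric cross-section in pc^2
    const double v = v_kms * pc_per_myr_per_kms;  // velocity in pc per Myr

    const double encounters_per_myr = n_stars_per_pc3 * sigma * v;
    std::cout << "mean time between encounters: "
              << 1.0 / encounters_per_myr << " Myr\n";  // comes out near 10 Myr
}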
Eventual fate Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal. Clusters that have enough mass to be gravitationally bound once the surrounding nebula has evaporated can remain distinct for many tens of millions of years, but, over time, internal and external processes tend also to disperse them. Internally, close encounters between stars can increase the velocity of a member beyond the escape velocity of the cluster. This results in the gradual 'evaporation' of cluster members. Externally, about every half-billion years or so an open cluster tends to be disturbed by external factors such as passing close to or through a molecular cloud. The gravitational tidal forces generated by such an encounter tend to disrupt the cluster. Eventually, the cluster becomes a stream of stars, not close enough to be a cluster but all related and moving in similar directions at similar speeds. The timescale over which a cluster disrupts depends on its initial stellar density, with more tightly packed clusters persisting longer. Estimated cluster half lives, after which half the original cluster members will have been lost, range from 150–800 million years, depending on the original density. After a cluster has become gravitationally unbound, many of its constituent stars will still be moving through space on similar trajectories, in what is known as a stellar association, moving cluster, or moving group. Several of the brightest stars in the 'Plough' of Ursa Major are former members of an open cluster which now form such an association, in this case the Ursa Major Moving Group. Eventually their slightly different relative velocities will see them scattered throughout the galaxy. A larger cluster is then known as a stream, if we discover the similar velocities and ages of otherwise well-separated stars. Studying stellar evolution When a Hertzsprung–Russell diagram is plotted for an open cluster, most stars lie on the main sequence. The most massive stars have begun to evolve away from the main sequence and are becoming red giants; the position of the turn-off from the main sequence can be used to estimate the age of the cluster. Because the stars in an open cluster are all at roughly the same distance from Earth, and were born at roughly the same time from the same raw material, the differences in apparent brightness among cluster members are due only to their mass. This makes open clusters very useful in the study of stellar evolution, because when comparing one star with another, many of the variable parameters are fixed. The study of the abundances of lithium and beryllium in open-cluster stars can give important clues about the evolution of stars and their interior structures. While hydrogen nuclei cannot fuse to form helium until the temperature reaches about 10 million K, lithium and beryllium are destroyed at temperatures of 2.5 million K and 3.5 million K respectively. This means that their abundances depend strongly on how much mixing occurs in stellar interiors. Through study of their abundances in open-cluster stars, variables such as age and chemical composition can be fixed. 
Studies have shown that the abundances of these light elements are much lower than models of stellar evolution predict. While the reason for this underabundance is not yet fully understood, one possibility is that convection in stellar interiors can 'overshoot' into regions where radiation is normally the dominant mode of energy transport. Astronomical distance scale Determining the distances to astronomical objects is crucial to understanding them, but the vast majority of objects are too far away for their distances to be directly determined. Calibration of the astronomical distance scale relies on a sequence of indirect and sometimes uncertain measurements relating the closest objects, for which distances can be directly measured, to increasingly distant objects. Open clusters are a crucial step in this sequence. The closest open clusters can have their distance measured directly by one of two methods. First, the parallax (the small change in apparent position over the course of a year caused by the Earth moving from one side of its orbit around the Sun to the other) of stars in close open clusters can be measured, like other individual stars. Clusters such as the Pleiades, Hyades and a few others within about 500 light years are close enough for this method to be viable, and results from the Hipparcos position-measuring satellite yielded accurate distances for several clusters. The other direct method is the so-called moving cluster method. This relies on the fact that the stars of a cluster share a common motion through space. Measuring the proper motions of cluster members and plotting their apparent motions across the sky will reveal that they converge on a vanishing point. The radial velocity of cluster members can be determined from Doppler shift measurements of their spectra, and once the radial velocity, proper motion and angular distance from the cluster to its vanishing point are known, simple trigonometry will reveal the distance to the cluster. The Hyades are the best-known application of this method, which reveals their distance to be 46.3 parsecs. Once the distances to nearby clusters have been established, further techniques can extend the distance scale to more distant clusters. By matching the main sequence on the Hertzsprung–Russell diagram for a cluster at a known distance with that of a more distant cluster, the distance to the more distant cluster can be estimated. The nearest open cluster is the Hyades: The stellar association consisting of most of the Plough stars is at about half the distance of the Hyades, but is a stellar association rather than an open cluster as the stars are not gravitationally bound to each other. The most distant known open cluster in our galaxy is Berkeley 29, at a distance of about 15,000 parsecs. Open clusters, especially super star clusters, are also easily detected in many of the galaxies of the Local Group and nearby: e.g., NGC 346 and the SSCs R136 and NGC 1569 A and B. Accurate knowledge of open cluster distances is vital for calibrating the period–luminosity relationship shown by variable stars such as Cepheid stars, which allows them to be used as standard candles. These luminous stars can be detected at great distances, and are then used to extend the distance scale to nearby galaxies in the Local Group. Indeed, the open cluster designated NGC 7790 hosts three classical Cepheids. RR Lyrae variables are too old to be associated with open clusters, and are instead found in globular clusters. 
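The trigonometry of the moving cluster method can be made explicit: once the radial velocity v_r, the proper motion μ, and the angular distance λ between the cluster and its vanishing point are known, the distance in parsecs follows from d = v_r tan(λ) / (4.74 μ), with v_r in km/s and μ in arcseconds per year, 4.74 km/s being one astronomical unit per year. The sketch below applies this with rough, assumed Hyades-like numbers rather than the actual measured values.

#include <cmath>
#include <iostream>

// Moving-cluster (convergent-point) distance estimate:
//   d [parsecs] = v_r [km/s] * tan(lambda) / (4.74 * mu [arcsec/yr])
double movingClusterDistance(double v_radial_kms,
                             double lambda_degrees,
                             double mu_arcsec_per_year) {
    const double pi = 3.14159265358979;
    const double lambda_rad = lambda_degrees * pi / 180.0;
    return v_radial_kms * std::tan(lambda_rad) / (4.74 * mu_arcsec_per_year);
}

int main() {
    // Rough, illustrative Hyades-like inputs (assumed, not quoted from the text):
    // radial velocity ~39 km/s, ~30 degrees from the convergent point,
    // mean proper motion ~0.11 arcsec/yr.
    std::cout << movingClusterDistance(39.0, 30.0, 0.11)
              << " parsecs\n";  // of the same order as the ~46 pc quoted above
}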
Planets The stars in open clusters can host exoplanets, just like stars outside open clusters. For example, the open cluster NGC 6811 contains two known planetary systems, Kepler-66 and Kepler-67. Additionally, several hot Jupiters are known to exist in the Beehive Cluster.
Physical sciences
Stellar astronomy
Astronomy
22693
https://en.wikipedia.org/wiki/Operator%20overloading
Operator overloading
In computer programming, operator overloading, sometimes termed operator ad hoc polymorphism, is a specific case of polymorphism, where different operators have different implementations depending on their arguments. Operator overloading is generally defined by a programming language, a programmer, or both. Rationale Operator overloading is syntactic sugar, and is used because it allows programming using notation nearer to the target domain and allows user-defined types a similar level of syntactic support as types built into a language. It is common, for example, in scientific computing, where it allows computing representations of mathematical objects to be manipulated with the same syntax as on paper. Operator overloading does not change the expressive power of a language (with functions), as it can be emulated using function calls. For example, consider variables a, b and c of some user-defined type, such as matrices: a + b * c. In a language that supports operator overloading, and with the usual assumption that the '*' operator has higher precedence than the '+' operator, this is a concise way of writing something like Add(a, Multiply(b, c)). However, the former syntax reflects common mathematical usage. Examples In this case, the addition operator is overloaded to allow addition on a user-defined type in C++:

Time operator+(const Time& lhs, const Time& rhs) {
  Time temp = lhs;
  temp.seconds += rhs.seconds;
  temp.minutes += temp.seconds / 60;
  temp.seconds %= 60;
  temp.minutes += rhs.minutes;
  temp.hours += temp.minutes / 60;
  temp.minutes %= 60;
  temp.hours += rhs.hours;
  return temp;
}

Addition is a binary operation, which means it has two operands. In C++, the arguments being passed are the operands, and the temp object is the returned value. The operation could also be defined as a class method, replacing lhs by the hidden this argument. However, this forces the left operand to be of type Time:

// The "const" right before the opening curly brace means that |this| is not modified.
Time Time::operator+(const Time& rhs) const {
  Time temp = *this;  // |this| should not be modified, so make a copy.
  temp.seconds += rhs.seconds;
  temp.minutes += temp.seconds / 60;
  temp.seconds %= 60;
  temp.minutes += rhs.minutes;
  temp.hours += temp.minutes / 60;
  temp.minutes %= 60;
  temp.hours += rhs.hours;
  return temp;
}

Note that a unary operator defined as a class method would receive no apparent argument (it works only on *this):

bool Time::operator!() const {
  return hours == 0 && minutes == 0 && seconds == 0;
}

The less-than (<) operator is often overloaded to sort a structure or class:

class Pair {
 public:
  bool operator<(const Pair& p) const {
    if (x_ == p.x_) {
      return y_ < p.y_;
    }
    return x_ < p.x_;
  }

 private:
  int x_;
  int y_;
};

As in the previous examples, in the last example operator overloading is done within the class. In C++, after overloading the less-than operator (<), standard sorting functions can be used to sort some classes. Criticisms Operator overloading has often been criticized because it allows programmers to reassign the semantics of operators depending on the types of their operands. For example, the use of the << operator in the C++ expression a << b shifts the bits in the variable a left by b bits if a and b are of an integer type, but if a is an output stream then the code will attempt to write b to the stream.
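This difference in meaning is easy to demonstrate. The following minimal example shows the same << token acting as an integer bit shift and as stream insertion on std::cout:

#include <iostream>

int main() {
    int a = 3;
    int b = 2;

    int shifted = a << b;         // integer operands: bit shift, 3 << 2 == 12
    std::cout << shifted << '\n';

    std::cout << a << b << '\n';  // stream operand on the left: writes "3" then "2"
}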
Because operator overloading allows the original programmer to change the usual semantics of an operator and to catch any subsequent programmers by surprise, it is considered good practice to use operator overloading with care (the creators of Java decided not to use this feature, although not necessarily for this reason). Another, more subtle, issue with operators is that certain rules from mathematics can be wrongly expected or unintentionally assumed. For example, the commutativity of + (i.e. that a + b == b + a) does not always apply; an example of this occurs when the operands are strings, since + is commonly overloaded to perform a concatenation of strings (for example, "ab" + "cd" yields "abcd", while "cd" + "ab" yields "cdab"). A typical counter to this argument comes directly from mathematics: While + is commutative on integers (and more generally any complex number), it is not commutative for other "types" of variables. In practice, + is not even always associative, for example with floating-point values due to rounding errors. Another example: In mathematics, multiplication is commutative for real and complex numbers but not for matrix multiplication. Catalog A classification of some common programming languages is made according to whether their operators are overloadable by the programmer and whether the operators are limited to a predefined set. Timeline of operator overloading 1960s The ALGOL 68 specification allowed operator overloading. Extract from the ALGOL 68 language specification (page 177) where the overloaded operators ¬, =, ≠, and abs are defined:

10.2.2. Operations on Boolean Operands
a) op ∨ = (bool a, b) bool: ( a | true | b );
b) op ∧ = (bool a, b) bool: ( a | b | false );
c) op ¬ = (bool a) bool: ( a | false | true );
d) op = = (bool a, b) bool: ( a∧b ) ∨ ( ¬b∧¬a );
e) op ≠ = (bool a, b) bool: ¬(a=b);
f) op abs = (bool a) int: ( a | 1 | 0 );

Note that no special declaration is needed to overload an operator, and the programmer is free to create new operators. For dyadic operators their priority compared to other operators can be set:

prio max = 9;
op max = (int a, b) int: ( a>b | a | b );
op ++ = ( ref int a ) int: ( a +:= 1 );

1980s Ada supports overloading of operators from its inception, with the publication of the Ada 83 language standard. However, the language designers chose to preclude the definition of new operators. Only extant operators in the language may be overloaded, by defining new functions with identifiers such as "+", "*", "&" etc. Subsequent revisions of the language (in 1995 and 2005) maintain the restriction to overloading of extant operators. In C++, operator overloading is more refined than in ALGOL 68. 1990s Java language designers at Sun Microsystems chose to omit overloading. Python allows operator overloading through the implementation of methods with special names. For example, the addition (+) operator can be overloaded by implementing the __add__ method. Ruby allows operator overloading as syntactic sugar for simple method calls. Lua allows operator overloading as syntactic sugar for method calls with the added feature that if the first operand doesn't define that operator, the method for the second operand will be used. 2000s Microsoft added operator overloading to C# in 2001 and to Visual Basic .NET in 2003. Scala treats all operators as methods and thus allows operator overloading by proxy. In Raku, the definition of all operators is delegated to lexical functions, and so, using function definitions, operators can be overloaded or new operators added.
For example, the function defined in the Rakudo source for incrementing a Date object with "+" is: multi infix:<+>(Date:D $d, Int:D $x) { Date.new-from-daycount($d.daycount + $x) } Since "multi" was used, the function gets added to the list of multidispatch candidates, and "+" is only overloaded for the case where the type constraints in the function signature are met. While the capacity for overloading includes +, *, >=, the postfix and term i, and so on, it also allows for overloading various brace operators: "[x, y]", "x[y]", "x{y}", and "x(y)". Kotlin has supported operator overloading since its creation.
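Unlike ALGOL 68 or Raku, C++ does not let the programmer invent new operator symbols, but several of the bracketing forms listed for Raku have C++ counterparts, since subscripting and function call are themselves overloadable operators. The sketch below is illustrative only; the Lookup class and its behaviour are invented for the example:

#include <iostream>
#include <map>
#include <string>

// Hypothetical class showing that subscripting (x[y]) and call syntax (x(y))
// are overloadable operators in C++.
class Lookup {
 public:
  // x[y]: subscript operator, inserts the key with value 0 if it is missing.
  int& operator[](const std::string& key) { return table_[key]; }

  // x(y): function-call operator, read-only access with a default of 0.
  int operator()(const std::string& key) const {
    auto it = table_.find(key);
    return it == table_.end() ? 0 : it->second;
  }

 private:
  std::map<std::string, int> table_;
};

int main() {
  Lookup counts;
  counts["okapi"] = 3;                     // uses operator[]
  std::cout << counts("okapi") << '\n';    // uses operator(), prints 3
  std::cout << counts("giraffe") << '\n';  // missing key, prints 0
}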
Technology
Programming languages
null
22709
https://en.wikipedia.org/wiki/Okapi
Okapi
The okapi (; Okapia johnstoni), also known as the forest giraffe, Congolese giraffe and zebra giraffe, is an artiodactyl mammal that is endemic to the northeast Democratic Republic of the Congo in central Africa. However, non-invasive genetic identification has suggested that a population has occurred south-west of the Congo River as well. It is the only species in the genus Okapia. Although the okapi has striped markings reminiscent of zebras, it is most closely related to the giraffe. The okapi and the giraffe are the only living members of the family Giraffidae. The okapi stands about tall at the shoulder and has a typical body length around . Its weight ranges from . It has a long neck, and large, flexible ears. Its coat is a chocolate to reddish brown, much in contrast with the white horizontal stripes and rings on the legs, and white ankles. Male okapis have short, distinct horn-like protuberances on their heads called ossicones, less than in length. Females possess hair whorls, and ossicones are absent. Okapis are primarily diurnal, but may be active for a few hours in darkness. They are essentially solitary, coming together only to breed. Okapis are herbivores, feeding on tree leaves and buds, grasses, ferns, fruits, and fungi. Rut in males and estrus in females does not depend on the season. In captivity, estrus cycles recur every 15 days. The gestational period is around 440 to 450 days long, following which usually a single calf is born. The juveniles are kept in hiding, and nursing takes place infrequently. Juveniles start taking solid food from three months, and weaning takes place at six months. Okapis inhabit canopy forests at altitudes of . The International Union for the Conservation of Nature and Natural Resources classifies the okapi as endangered. Major threats include habitat loss due to logging and human settlement. Illegal mining and extensive hunting for bushmeat and skin have also led to a decline in populations. The Okapi Conservation Project was established in 1987 to protect okapi populations. Etymology and taxonomy Although the okapi was unknown to the Western world until the 20th century, it may have been depicted since the early fifth century BCE on the façade of the Apadana at Persepolis, a gift from the Ethiopian procession to the Achaemenid kingdom. For years, Europeans in Africa had heard of an animal that they came to call the African unicorn. The animal was brought to prominent European attention by speculation on its existence found in press reports covering Henry Morton Stanley's journeys in 1887. In his travelogue of exploring the Congo, Stanley mentioned a kind of donkey that the natives called the atti, which scholars later identified as the okapi. When the British special commissioner in Uganda, Sir Harry Johnston, discovered some Pygmy inhabitants of the Congo being abducted by a showman for exhibition, he rescued them and promised to return them to their homes. The Pygmies fed Johnston's curiosity about the animal mentioned in Stanley's book. Johnston was puzzled by the okapi tracks the natives showed him; while he had expected to be on the trail of some sort of forest-dwelling horse, the tracks were of a cloven-hoofed beast. Though Johnston did not see an okapi himself, he did manage to obtain pieces of striped skin and eventually a skull. From this skull, the okapi was correctly classified as a relative of the giraffe; in 1901, the species was formally recognized as Okapia johnstoni. 
Okapia johnstoni was first described as Equus johnstoni by English zoologist Philip Lutley Sclater in 1901. The generic name Okapia derives either from the Mbuba name or the related Lese Karo name , while the specific name (johnstoni) is in recognition of Johnston, who first acquired an okapi specimen for science from the Ituri Forest. In 1901, Sclater presented a painting of the okapi before the Zoological Society of London that depicted its physical features with some clarity. Much confusion arose regarding the taxonomical status of this newly discovered animal. Sir Harry Johnston himself called it a Helladotherium, or a relative of other extinct giraffids. Based on the description of the okapi by Pygmies, who referred to it as a "horse", Sclater named the species Equus johnstoni. Subsequently, zoologist Ray Lankester declared that the okapi represented an unknown genus of the Giraffidae, which he placed in its own genus, Okapia, and assigned the name Okapia johnstoni to the species. In 1902, Swiss zoologist Charles Immanuel Forsyth Major suggested the inclusion of O. johnstoni in the extinct giraffid subfamily Palaeotraginae. However, the species was placed in its own subfamily Okapiinae, by Swedish palaeontologist Birger Bohlin in 1926, mainly due to the lack of a cingulum, a major feature of the palaeotragids. In 1986, Okapia was finally established as a sister genus of Giraffa on the basis of cladistic analysis. The two genera together with Palaeotragus constitute the tribe Giraffini. Evolution The earliest members of the Giraffidae first appeared in the early Miocene in Africa, having diverged from the superficially deer-like climacoceratids. Giraffids spread into Europe and Asia by the middle Miocene in a first radiation. Another radiation began in the Pliocene, but was terminated by a decline in diversity in the Pleistocene. Several important primitive giraffids existed more or less contemporaneously in the Miocene (23–10 million years ago), including Canthumeryx, Giraffokeryx, Palaeotragus, and Samotherium. According to palaeontologist and author Kathleen Hunt, Samotherium split into Okapia (18 million years ago) and Giraffa (12 million years ago). However, J. D. Skinner argued that Canthumeryx gave rise to the okapi and giraffe through the latter three genera and that the okapi is the extant form of Palaeotragus. The okapi is sometimes referred to as a living fossil, as it has existed as a species over a long geological time period, and morphologically resembles more primitive forms (e.g. Samotherium). In 2016, a genetic study found that the common ancestor of giraffe and okapi lived about 11.5 million years ago. Description The okapi is a medium-sized giraffid, standing tall at the shoulder. Its average body length is about and its weight ranges from . It has a long neck, and large and flexible ears. In sharp contrast to the white horizontal stripes on the legs and white ankles, the okapi's coat is a chocolate to reddish brown. The distinctive stripes resemble those of a zebra. These features serve as an effective camouflage amidst dense vegetation. The face, throat, and chest are greyish white. Interdigital glands are present on all four feet, and are slightly larger on the front feet. Male okapis have short, hair-covered horn-like structures called ossicones, less than in length, which are similar in form and function to the ossicones of a giraffe. 
The okapi exhibits sexual dimorphism, with females taller on average, slightly redder, and lacking prominent ossicones, instead possessing hair whorls. The okapi shows several adaptations to its tropical habitat. The large number of rod cells in the retina facilitate night vision, and an efficient olfactory system is present. The large auditory bullae of the temporal bone allow a strong sense of hearing. The dental formula of the okapi is . Teeth are low-crowned and finely cusped, and efficiently cut tender foliage. The large cecum and colon help in microbial digestion, and a quick rate of food passage allows for lower cell wall digestion than in other ruminants. The okapi is easily distinguished from its nearest extant relative, the giraffe. It is much smaller than the giraffe and shares more external similarities with bovids and cervids. Ossicones are present only in the male okapi, while both sexes of giraffe possess this feature. The okapi has large palatine sinuses (hollow cavities in the palate), unique among the giraffids. Morphological features shared between the giraffe and the okapi include a similar gait – both use a pacing gait, stepping simultaneously with the front and the hind leg on the same side of the body, unlike other ungulates that walk by moving alternate legs on either side of the body – and a long, black tongue (longer in the okapi) useful for plucking buds and leaves, as well as for grooming. Ecology and behaviour Okapis are primarily diurnal, but may be active for a few hours in darkness. They are essentially solitary, coming together only to breed. They have overlapping home ranges and typically occur at densities around 0.6 animals per square kilometre. Male home ranges average , while female home ranges average . Males migrate continuously, while females are sedentary. Males often mark territories and bushes with their urine, while females use common defecation sites. Grooming is a common practice, focused at the earlobes and the neck. Okapis often rub their necks against trees, leaving a brown exudate. The male is protective of his territory, but allows females to pass through the domain to forage. Males visit female home ranges at breeding time. Although generally tranquil, the okapi can kick and butt with its head to show aggression. As the vocal cords are poorly developed, vocal communication is mainly restricted to three sounds — "chuff" (contact calls used by both sexes), "moan" (by females during courtship) and "bleat" (by infants under stress). Individuals may engage in Flehmen response, a visual expression in which the animal curls back its upper lips, displays the teeth, and inhales through the mouth for a few seconds. The leopard is the main natural predator of the okapi. Diet Okapis are herbivores, feeding on tree leaves and buds, branches, grasses, ferns, fruits, and fungi. They are unique in the Ituri Forest as they are the only known mammal that feeds solely on understory vegetation, where they use their tongues to selectively browse for suitable plants. The tongue is also used to groom their ears and eyes. They prefer to feed in treefall gaps. The okapi has been known to feed on over 100 species of plants, some of which are known to be poisonous to humans and other animals. Fecal analysis shows that none of those 100 species dominates the diet of the okapi. Staple foods comprise shrubs and lianas. The main constituents of the diet are woody, dicotyledonous species; monocotyledonous plants are not eaten regularly. 
In the Ituri forest, the okapi feeds mainly upon the plant families Acanthaceae, Ebenaceae, Euphorbiaceae, Flacourtiaceae, Loganiaceae, Rubiaceae, and Violaceae. Reproduction Female okapis become sexually mature at about one-and-a-half years old, while males reach maturity after two years. Rut in males and estrus in females does not depend on the season. In captivity, estrous cycles recur every 15 days. The male and the female begin courtship by circling, smelling, and licking each other. The male shows his interest by extending his neck, tossing his head, and protruding one leg forward. This is followed by mounting and copulation. The gestational period is around 440 to 450 days long, following which usually a single calf is born, weighing . The udder of the pregnant female starts swelling 2 months before parturition, and vulval discharges may occur. Parturition takes 3–4 hours, and the female stands throughout this period, though she may rest during brief intervals. The mother consumes the afterbirth and extensively grooms the infant. Her milk is very rich in proteins and low in fat. As in other ruminants, the infant can stand within 30 minutes of birth. Although generally similar to adults, newborn calves have long hairs around the eye (resembling false eyelashes), a long dorsal mane, and long white hairs in the stripes. These features gradually disappear and give way to the general appearance within a year. The juveniles are kept in hiding, and nursing takes place infrequently. Calves are known not to defecate for the first month or two of life, which is hypothesized to help avoid predator detection in their most vulnerable phase of life. The growth rate of calves is appreciably high in the first few months of life, after which it gradually declines. Juveniles start taking solid food from 3 months, and weaning takes place at 6 months. Ossicone development in males takes 1 year after birth. The okapi's typical lifespan is 20–30 years. Distribution and habitat The okapi is endemic to the Democratic Republic of the Congo, where it occurs north and east of the Congo River. It ranges from the Maiko National Park northward to the Ituri rainforest, then through the river basins of the Rubi, Lake Tele, and Ebola to the west and the Ubangi River further north. Smaller populations exist west and south of the Congo River. It is also common in the Wamba and Epulu areas. It is extinct in Uganda. The okapi inhabits canopy forests at elevations of . It occasionally uses seasonally inundated areas, but does not occur in gallery forests, swamp forests, and habitats disturbed by human settlements. In the wet season, it visits rocky inselbergs that offer forage uncommon elsewhere. Results of research conducted in the late 1980s in a mixed Cynometra forest indicated that the okapi population density averaged 0.53 animals per square kilometre. In 2008, it was recorded in Virunga National Park. There is also evidence that okapis were also observed in the Semuliki Valley in Uganda by Europeans, but later became extinct in the late 1970s. The Semuliki Valley provides a similar habitat to the Congo Basin. Status Threats and conservation The IUCN classifies the okapi as endangered. It is fully protected under Congolese law. The Okapi Wildlife Reserve and Maiko National Park support significant populations of the okapi, though a steady decline in numbers has occurred due to several threats. 
Other areas of occurrence are the Rubi Tele Hunting Reserve, the Abumombanzi Reserve, the Sankuru Nature Reserve, and the Lomami National Park. Major threats include habitat loss due to logging and human settlement. Extensive hunting for bushmeat and skin and illegal mining have also led to population declines. A threat that has emerged quite recently is the presence of illegal armed groups around protected areas, inhibiting conservation and monitoring actions. A small population occurs north of the Virunga National Park, but lacks protection due to the presence of armed groups in the vicinity. In June 2012, a gang of poachers attacked the headquarters of the Okapi Wildlife Reserve, killing six guards and other staff as well as all 14 okapis at their breeding center. The Okapi Conservation Project, established in 1987, works towards the conservation of the okapi, as well as the growth of the indigenous Mbuti people. In November 2011, the White Oak Conservation Center and Jacksonville Zoo and Gardens hosted an international meeting of the Okapi Species Survival Plan and the Okapi European Endangered Species Programme at Jacksonville, which was attended by representatives from zoos in the US, Europe, and Japan. The aim was to discuss the management of captive okapis and arrange support for okapi conservation. Many zoos in North America and Europe currently have okapis in captivity. Okapis in zoos Around 100 okapis are in accredited Association of Zoos and Aquariums (AZA) zoos. The okapi population is managed in America by the AZA's Species Survival Plan, a breeding program that works to ensure genetic diversity in the captive population of endangered animals, while the EEP (European studbook) and ISB (global studbook) are managed by Antwerp Zoo in Belgium, which was the first zoo to have an okapi on display (in 1919), as well as one of the most successful in breeding them. In 1937, the Bronx Zoo became the first in North America to acquire an okapi. Its breeding program has been among the most successful, with 13 calves born there between 1991 and 2011. The San Diego Zoo has exhibited okapis since 1956, and their first okapi calf was born in 1962. Since then, there have been more than 60 okapis born at the zoo and the nearby San Diego Zoo Safari Park, the most recent being Mosi, a male calf born on 21 July 2017 at the zoo. The Brookfield Zoo in Chicago has also greatly contributed to the captive population of okapis in accredited zoos. The zoo has had 28 okapi births since 1959. Other North American zoos that exhibit and breed okapis include: Denver Zoo and Cheyenne Mountain Zoo (Colorado); Houston Zoo, Dallas Zoo, and San Antonio Zoo (Texas); Disney's Animal Kingdom, White Oak Conservation, Zoo Miami, and ZooTampa at Lowry Park (Florida); Los Angeles Zoo, Sacramento Zoo, and San Diego Zoo (California); Saint Louis Zoo (Missouri); Cincinnati Zoo and Botanical Garden and Columbus Zoo and Aquarium (Ohio); Memphis Zoo and Nashville Zoo (Tennessee); The Maryland Zoo in Baltimore (Maryland); Sedgwick County Zoo and Tanganyika Wildlife Park (Kansas); Roosevelt Park Zoo (North Dakota); Henry Doorly Zoo and Aquarium (Nebraska); Philadelphia Zoo (Pennsylvania); Potawatomi Zoo (Indiana); Oklahoma City Zoo and Botanical Garden (Oklahoma); Blank Park Zoo (Iowa); and Potter Park Zoo (Michigan). 
In Europe, zoos that exhibit and breed okapis include: Chester Zoo, London Zoo, Marwell Zoo, The Wild Place, and Yorkshire Wildlife Park (United Kingdom); Dublin Zoo (Ireland); Berlin Zoo, Frankfurt Zoo, Wilhelma Zoo, Wuppertal Zoo, Cologne Zoo, and Leipzig Zoo (Germany); Zoo Basel (Switzerland); Copenhagen Zoo (Denmark); Rotterdam Zoo and Safaripark Beekse Bergen (Netherlands); Antwerp Zoo (Belgium); Dvůr Králové Zoo (Czech Republic); Wrocław Zoo (Poland); Bioparc Zoo de Doué and ZooParc de Beauval (France); and Lisbon Zoo (Portugal). In Asia, three Japanese zoos exhibit okapis: Ueno Zoo in Tokyo; Kanazawa Zoo and Zoorasia in Yokohama.
Biology and health sciences
Giraffidae
Animals
22710
https://en.wikipedia.org/wiki/Ovary
Ovary
The ovary () is a gonad in the female reproductive system that produces ova; when released, an ovum travels through the fallopian tube/oviduct into the uterus. There is an ovary on the left and the right side of the body. The ovaries are endocrine glands, secreting various hormones that play a role in the menstrual cycle and fertility. The ovary progresses through many stages, beginning in the prenatal period and continuing through menopause. Structure Each ovary is whitish in color and located alongside the lateral wall of the uterus in a region called the ovarian fossa. The ovarian fossa is the region that is bounded by the external iliac artery and lies in front of the ureter and the internal iliac artery. This area is about 4 cm x 3 cm x 2 cm in size. The ovaries are surrounded by a capsule, and have an outer cortex and an inner medulla. The capsule is of dense connective tissue and is known as the tunica albuginea. Usually, ovulation occurs in one of the two ovaries, releasing an egg each menstrual cycle. The side of the ovary closest to the fallopian tube is connected to it by the infundibulopelvic ligament, and the other side points downwards, attached to the uterus via the ovarian ligament. Other structures and tissues of the ovaries include the hilum. Ligaments The ovaries lie within the peritoneal cavity, on either side of the uterus, to which they are attached via a fibrous cord called the ovarian ligament. The ovaries are uncovered in the peritoneal cavity but are tethered to the body wall via the suspensory ligament of the ovary, which is a posterior extension of the broad ligament of the uterus. The part of the broad ligament of the uterus that covers the ovary is known as the mesovarium. The ovarian pedicle is made up of part of the fallopian tube, the mesovarium, the ovarian ligament, and the ovarian blood vessels. Microanatomy The surface of the ovaries is covered with a membrane consisting of a lining of simple cuboidal-to-columnar shaped mesothelium, called the germinal epithelium. The outer layer is the ovarian cortex, consisting of ovarian follicles and stroma in between them. Included in the follicles are the cumulus oophorus, membrana granulosa (and the granulosa cells inside it), corona radiata, zona pellucida, and primary oocyte. The theca of the follicle, the antrum, and the liquor folliculi are also contained in the follicle. Also in the cortex is the corpus luteum, derived from the follicles. The innermost layer is the ovarian medulla. It can be hard to distinguish between the cortex and medulla, but follicles are usually not found in the medulla. Follicular cells are flat epithelial cells that originate from surface epithelium covering the ovary. They are surrounded by granulosa cells that have changed from flat to cuboidal and proliferated to produce a stratified epithelium. The ovary also contains blood vessels and lymphatics. Function At puberty, the ovary begins to secrete increasing levels of hormones. Secondary sex characteristics begin to develop in response to the hormones. The ovary changes structure and function beginning at puberty. Since the ovaries are able to regulate hormones, they also play an important role in pregnancy and fertility. When egg cells (oocytes) are released from the ovary, a variety of feedback mechanisms stimulate the endocrine system, causing hormone levels to change. These feedback mechanisms are controlled by the hypothalamus and the pituitary gland. Messages or signals from the hypothalamus are sent to the pituitary gland. 
In turn, the pituitary gland releases hormones to the ovaries. From this signaling, the ovaries release their own hormones. Gamete production The ovaries are the site of production and periodical release of egg cells, the female gametes. In the ovaries, the developing egg cells (or oocytes) mature in the fluid-filled follicles. Typically, only one oocyte develops at a time, but others can also mature simultaneously. Follicles are composed of different types and numbers of cells according to the stage of their maturation, and their size is indicative of the stage of oocyte development. When an oocyte completes its maturation in the ovary, a surge of luteinizing hormone secreted by the pituitary gland stimulates the release of the oocyte through the rupture of the follicle, a process called ovulation. The follicle remains functional and reorganizes into a corpus luteum, which secretes progesterone in order to prepare the uterus for an eventual implantation of the embryo. Hormone secretion At maturity, ovaries secrete estrogen, androgen, inhibin, and progestogen. In women before menopause, 50% of testosterone is produced by the ovaries and released directly into the bloodstream. The other 50% of testosterone in the bloodstream is made from conversion of the adrenal pre-androgens (DHEA and androstenedione) to testosterone in other parts of the body. Estrogen is responsible for the appearance of secondary sex characteristics for females at puberty and for the maturation and maintenance of the reproductive organs in their mature functional state. Progesterone prepares the uterus for pregnancy, and the mammary glands for lactation. Progesterone functions with estrogen by promoting menstrual cycle changes in the endometrium. Ovarian aging As women age, they experience a decline in reproductive performance leading to menopause. This decline is tied to a decline in the number of ovarian follicles. Although about 1 million oocytes are present at birth in the human ovary, only about 500 (about 0.05%) of these ovulate; the rest do not. The decline in ovarian reserve appears to occur at a constantly increasing rate with age, and leads to nearly complete exhaustion of the reserve by about age 52. As ovarian reserve and fertility decline with age, there is also a parallel increase in pregnancy failure and meiotic errors resulting in chromosomally abnormal conceptions. The ovarian reserve and fertility perform optimally around 20–30 years of age. Around 45 years of age, the menstrual cycle begins to change and the follicle pool decreases significantly. The events that lead to ovarian aging remain unclear. The variability of aging could reflect environmental factors, lifestyle habits, or genetic factors. Women with an inherited mutation in the DNA repair gene BRCA1 undergo menopause prematurely, suggesting that naturally occurring DNA damage in oocytes is repaired less efficiently in these women, and this inefficiency leads to early reproductive failure. The BRCA1 protein plays a key role in a type of DNA repair termed homologous recombinational repair, which is the only known cellular process that can accurately repair DNA double-strand breaks. Titus et al. showed that DNA double-strand breaks accumulate with age in the primordial follicles of humans and mice. Primordial follicles contain oocytes that are at an intermediate (prophase I) stage of meiosis. 
Meiosis is the general process in eukaryotic organisms by which germ cells are formed, and it is likely an adaptation for removing DNA damage, especially double-strand breaks, from germ line DNA (see Meiosis and Origin and function of meiosis). Homologous recombinational repair is especially promoted during meiosis. Titus et al. also found that expression of four key genes necessary for homologous recombinational repair of DNA double-strand breaks (BRCA1, MRE11, RAD51 and ATM) declines with age in the oocytes of humans and mice. They hypothesized that DNA double-strand break repair is vital for the maintenance of oocyte reserve and that a decline in efficiency of repair with age plays a key role in ovarian aging. A study that identified 290 genetic determinants of ovarian ageing also found that DNA damage response processes are implicated, and suggested that extending fertility in women could improve bone health and reduce the risk of type 2 diabetes, but increase the risk of hormone-sensitive cancers. A variety of testing methods can be used in order to determine fertility based on maternal age. Many of these tests measure levels of the hormones FSH and GnRH. Methods such as measuring AMH (anti-Müllerian hormone) levels and AFC (antral follicle count) can predict ovarian aging. AMH levels serve as an indicator of ovarian aging since they reflect the quality of the ovarian follicles. Clinical significance Ovarian diseases can be classified as endocrine disorders or as disorders of the reproductive system. If the egg fails to be released from the follicle in the ovary, an ovarian cyst may form. Small ovarian cysts are common in healthy women. Some women have more follicles than usual (polycystic ovary syndrome), which prevents the follicles from growing normally and causes cycle irregularities. Society and culture Cryopreservation Cryopreservation of ovarian tissue, often called ovarian tissue cryopreservation, is of interest to women who want to preserve their reproductive function beyond the natural limit, or whose reproductive potential is threatened by cancer therapy, for example in hematologic malignancies or breast cancer. The procedure is to take a part of the ovary and carry out slow freezing before storing it in liquid nitrogen whilst therapy is undertaken. Tissue can then be thawed and implanted near the fallopian tube, either orthotopically (at the natural location) or heterotopically (on the abdominal wall), where it starts to produce new eggs, allowing normal conception to take place. A study of 60 procedures concluded that ovarian tissue harvesting appears to be safe. The ovarian tissue may also be transplanted into mice that are immunocompromised (SCID mice) to avoid graft rejection, and tissue can be harvested later when mature follicles have developed. History In former centuries, medical authors, for example Galen, referred to a woman's ovaries as "female testes". Other animals Birds have only one functional ovary (the left), while the other remains vestigial. In mammals including humans, the female ovary is homologous to the male testicle, in that they are both gonads and endocrine glands. Ovaries of some kind are found in the female reproductive system of many invertebrates that employ sexual reproduction. However, they develop in a very different way in most invertebrates than they do in vertebrates, and are not truly homologous. Many of the features found in human ovaries are common to all vertebrates, including the presence of follicular cells, tunica albuginea, and so on. 
However, many species produce a far greater number of eggs during their lifetime than do humans, so that, in fish and amphibians, there may be hundreds, or even millions, of fertile eggs present in the ovary at any given time. In these species, fresh eggs may be developing from the germinal epithelium throughout life. Corpora lutea are found only in mammals, and in some elasmobranch fish; in other species, the remnants of the follicle are quickly resorbed by the ovary. In birds, reptiles, and monotremes, the egg is relatively large, filling the follicle, and distorting the shape of the ovary at maturity. Amphibians and reptiles have no ovarian medulla; the central part of the ovary is a hollow, lymph-filled space. The ovary of teleosts is also often hollow, but in this case, the eggs are shed into the cavity, which opens into the oviduct. Certain nematodes of the genus Philometra are parasitic in the ovary of marine fishes and can be spectacular, with females as long as , coiled in the ovary of a fish half this length. Although most female vertebrates have two ovaries, this is not the case in all species. In most birds and in platypuses, the right ovary never matures, so that only the left is functional. (Exceptions include the kiwi and some, but not all, raptors, in which both ovaries persist.) In some elasmobranchs, only the right ovary develops fully. In the primitive jawless fish, and some teleosts, there is only one ovary, formed by the fusion of the paired organs in the embryo.
Biology and health sciences
Reproductive system
null
22713
https://en.wikipedia.org/wiki/Opium
Opium
Opium (or poppy tears, scientific name: Lachryma papaveris) is dried latex obtained from the seed capsules of the opium poppy Papaver somniferum. Approximately 12 percent of opium is made up of the analgesic alkaloid morphine, which is processed chemically to produce heroin and other opioids for medicinal use and for the illegal drug trade. The latex also contains the closely related opiates codeine and thebaine, and non-analgesic alkaloids such as papaverine and noscapine. The traditional, labor-intensive method of obtaining the latex is to scratch ("score") the immature seed pods (fruits) by hand; the latex leaks out and dries to a sticky yellowish residue that is later scraped off and dehydrated. The English word for opium is borrowed from Latin, which in turn comes from the Greek (ópion), a diminutive of ὀπός (opós, "juice of a plant"). The word meconium (derived from the Greek for "opium-like", but now used to refer to newborn stools) historically referred to related, weaker preparations made from other parts of the opium poppy or different species of poppies. The production methods have not significantly changed since ancient times. Through selective breeding of the Papaver somniferum plant, the content of the phenanthrene alkaloids morphine, codeine, and to a lesser extent thebaine has been greatly increased. In modern times, much of the thebaine, which often serves as the raw material for the synthesis of oxycodone, hydrocodone, hydromorphone, and other semisynthetic opiates, originates from the extraction of Papaver orientale or Papaver bracteatum. For the illegal drug trade, the morphine is extracted from the opium latex, reducing the bulk weight by 88%. It is then converted to heroin, which is almost twice as potent and increases the value by a similar factor. The reduced weight and bulk make it easier to smuggle. History The Mediterranean region contains the earliest archeological evidence of human use; the oldest known seeds date back to more than 5000 BCE in the Neolithic age, with purposes such as food, anaesthetics, and ritual. Evidence from ancient Greece indicates that opium was consumed in several ways, including inhalation of vapors, suppositories, medical poultices, and as a combination with hemlock for suicide. Opium is mentioned in the most important medical texts of the ancient and medieval world, including the Ebers Papyrus and the writings of Dioscorides, Galen, and Avicenna. Widespread medical use of unprocessed opium continued through the American Civil War before giving way to morphine and its successors, which could be injected at a precisely controlled dosage. Ancient use (pre-500 CE) Opium has been actively collected since approximately 3400 BCE. At least 17 finds of Papaver somniferum from Neolithic settlements have been reported throughout Switzerland, Germany, and Spain, including the placement of large numbers of poppy seed capsules at a burial site (the Cueva de los Murciélagos, or "Bat Cave", in Spain), which has been carbon-14 dated to 4200 BCE. Numerous finds of P. somniferum or P. setigerum from Bronze Age and Iron Age settlements have also been reported. The first known cultivation of opium poppies was in Mesopotamia, approximately 3400 BCE, by Sumerians, who called the plant hul gil, the "joy plant". Tablets found at Nippur, a Sumerian spiritual center south of Baghdad, described the collection of poppy juice in the morning and its use in the production of opium. 
Cultivation continued in the Middle East under the Assyrians, who also collected poppy juice in the morning after scoring the pods with an iron scoop; they called the juice aratpa-pal, possibly the root of the word Papaver. Opium production continued under the Babylonians and Egyptians. Opium was used with poison hemlock to put people quickly and painlessly to death. It was also used in medicine. Spongia somnifera, sponges soaked in opium, were used during surgery. The Egyptians cultivated opium thebaicum in famous poppy fields around 1300 BCE. Opium was traded from Egypt by the Phoenicians and Minoans to destinations around the Mediterranean Sea, including Greece, Carthage, and Europe. By 1100 BCE, opium was cultivated on Cyprus, where surgical-quality knives were used to score the poppy pods, and opium was cultivated, traded, and smoked. Opium was also mentioned after the Persian conquest of Assyria and Babylonian lands in the . From the earliest finds, opium has appeared to have ritual significance, and anthropologists have speculated that ancient priests may have used the drug as a proof of healing power. In Egypt, the use of opium was generally restricted to priests, magicians, and warriors; its invention was credited to Thoth, and it was said to have been given by Isis to Ra as treatment for a headache. A figure of the Minoan "goddess of the narcotics", wearing a crown of three opium poppies, BCE, was recovered from the Sanctuary of Gazi, Crete, together with a simple smoking apparatus. The Greek gods Hypnos (Sleep), Nyx (Night), and Thanatos (Death) were depicted wreathed in poppies or holding them. Poppies also frequently adorned statues of Apollo, Asclepius, Pluto, Demeter, Aphrodite, Kybele and Isis, symbolizing nocturnal oblivion. Islamic societies (500–1500 CE) As the power of the Roman Empire declined, the lands to the south and east of the Mediterranean Sea became incorporated into the Islamic Empires. Some Muslims believe that hadiths, such as those in Sahih Bukhari, prohibit every intoxicating substance, though the use of intoxicants in medicine has been widely permitted by scholars. Dioscorides' five-volume De Materia Medica, the precursor of pharmacopoeias, remained in use from the 1st to the 16th centuries (it was edited and improved in the Arabic versions) and described opium and the wide range of its uses prevalent in the ancient world. Between 400 and 1200 AD, Arab traders introduced opium to China, and to India by 700 AD. The Persian physician Muhammad ibn Zakariya al-Razi ("Rhazes", 845–930 CE) maintained a laboratory and school in Baghdad, and was a student and critic of Galen; he made use of opium in anesthesia and recommended its use for the treatment of melancholy in Fi ma-la-yahdara al-tabib, "In the Absence of a Physician", a home medical manual directed toward ordinary citizens for self-treatment if a doctor was not available. The renowned Andalusian ophthalmologic surgeon Abu al-Qasim al-Zahrawi ("Abulcasis", 936–1013 CE) relied on opium and mandrake as surgical anesthetics and wrote a treatise, al-Tasrif, that influenced medical thought well into the 16th century. The Persian physician Abū ‘Alī al-Husayn ibn Sina ("Avicenna") described opium as the most powerful of the stupefacients, in comparison to mandrake and other highly effective herbs, in The Canon of Medicine. The text lists medicinal effects of opium, such as analgesia, hypnosis, antitussive effects, gastrointestinal effects, cognitive effects, respiratory depression, neuromuscular disturbances, and sexual dysfunction. 
It also refers to opium's potential as a poison. Avicenna describes several methods of delivery and recommendations for doses of the drug. This classic text was translated into Latin in 1175 and later into many other languages and remained authoritative until the 19th century. Şerafeddin Sabuncuoğlu used opium in the 14th-century Ottoman Empire to treat migraine headaches, sciatica, and other painful ailments. Reintroduction to Western medicine Manuscripts of Pseudo-Apuleius's 5th-century work from the 10th and 11th centuries refer to the use of wild poppy Papaver agreste or Papaver rhoeas (identified as P. silvaticum) instead of P. somniferum for inducing sleep and relieving pain. The use of Paracelsus' laudanum was introduced to Western medicine in 1527, when Philippus Aureolus Theophrastus Bombastus von Hohenheim, better known by the name Paracelsus, claimed (dubiously) to have returned from wanderings in Arabia with a famous sword, within the pommel of which he kept "Stones of Immortality" compounded from opium thebaicum, citrus juice, and "quintessence of gold". The name "Paracelsus" was a pseudonym signifying him the equal or better of Aulus Cornelius Celsus, whose text, which described the use of opium or a similar preparation, had recently been translated and reintroduced to medieval Europe. The Canon of Medicine, the standard medical textbook that Paracelsus burned in a public bonfire three weeks after being appointed professor at the University of Basel, also described the use of opium, though many Latin translations were of poor quality. Laudanum was originally the 16th-century term for a medicine associated with a particular physician that was widely well-regarded, but became standardized as "tincture of opium", a solution of opium in ethanol, which Paracelsus has been credited with developing. During his lifetime, Paracelsus was viewed as an adventurer who challenged the theories and mercenary motives of contemporary medicine with dangerous chemical therapies, but his therapies marked a turning point in Western medicine. In the 1660s, laudanum was recommended for pain, sleeplessness, and diarrhea by Thomas Sydenham, the renowned "father of English medicine" or "English Hippocrates", to whom is attributed the quote, "Among the remedies which it has pleased Almighty God to give to man to relieve his sufferings, none is so universal and so efficacious as opium." Use of opium as a cure-all was reflected in the formulation of mithridatium described in the 1728 Chambers Cyclopedia, which included true opium in the mixture. Eventually, laudanum became readily available and extensively used by the 18th century in Europe, especially England. Compared to other chemicals available to 18th century regular physicians, opium was a benign alternative to arsenic, mercury, or emetics, and it was remarkably successful in alleviating a wide range of ailments. Due to the constipation often produced by the consumption of opium, it was one of the most effective treatments for cholera, dysentery, and diarrhea. As a cough suppressant, opium was used to treat bronchitis, tuberculosis, and other respiratory illnesses. Opium was additionally prescribed for rheumatism and insomnia. Medical textbooks even recommended its use by people in good health, to "optimize the internal equilibrium of the human body". During the 18th century, opium was found to be a good remedy for nervous disorders. 
Due to its sedative and tranquilizing properties, it was used to quiet the minds of those with psychosis, to help people who were considered insane, and to help treat patients with insomnia. However, despite its medicinal value in these cases, it was noted that in cases of psychosis it could cause anger or depression, and, due to the drug's euphoric effects, it could leave depressed patients more depressed after the effects wore off, because they had become accustomed to being high. The standard medical use of opium persisted well into the 19th century. US president William Henry Harrison was treated with opium in 1841, and in the American Civil War, the Union Army used 175,000 lb (80,000 kg) of opium tincture and powder and about 500,000 opium pills. During this time of popularity, users called opium "God's Own Medicine". One reason for the increase in opiate consumption in the United States during the 19th century was the prescribing and dispensing of legal opiates by physicians and pharmacists to women with "female complaints" (mostly to relieve menstrual pain and hysteria). Because opiates were viewed as more humane than punishment or restraint, they were often used to treat the mentally ill. Between 150,000 and 200,000 opiate addicts lived in the United States in the late 19th century, and between two-thirds and three-quarters of these addicts were women. Opium addiction in the later 19th century received a hereditary definition. Dr. George Beard in 1869 proposed his theory of neurasthenia, a hereditary nervous system deficiency that could predispose an individual to addiction. Neurasthenia was increasingly tied in medical rhetoric to the "nervous exhaustion" suffered by many a white-collar worker in the increasingly hectic and industrialized U.S. life; such workers were the most likely potential clients of physicians. Recreational use in Europe, the Middle East and the US (11th to 19th centuries) Soldiers returning home from the Crusades in the 11th to 13th centuries brought opium with them. Opium is said to have been used for recreational purposes from the 14th century onwards in Muslim societies. Ottoman and European testimonies confirm that from the 16th to the 19th centuries Anatolian opium was eaten in Constantinople as much as it was exported to Europe. In 1573, for instance, a Venetian visitor to the Ottoman Empire observed that many of the Turkish natives of Constantinople regularly drank a "certain black water made with opium" that made them feel good, but to which they became so addicted that, if they tried to go without, they would "quickly die". Dervishes claimed that drinking it bestowed on them visionary glimpses of future happiness. Indeed, the Ottoman Empire supplied the West with opium long before China and India. Extensive textual and pictorial sources also show that poppy cultivation and opium consumption were widespread in Safavid Iran and Mughal India. England In England, as Virginia Berridge has argued, opium fulfilled a "critical" role, as it did in other societies, in addressing multifactorial pain, cough, dysentery, and diarrhea. Opium was a medical panacea of the 19th century; "any respectable person" could purchase a range of hashish pastes and (later) morphine with a complementary injection kit. Thomas De Quincey's Confessions of an English Opium-Eater (1822), one of the first and most famous literary accounts of opium addiction written from the point of view of an addict, details the pleasures and dangers of the drug. 
In the book, it is not Ottoman, nor Chinese, addicts about whom he writes, but English opium users: "I question whether any Turk, of all that ever entered the paradise of opium-eaters, can have had half the pleasure I had." De Quincey writes about the great English Romantic poet Samuel Taylor Coleridge (1772–1834), whose "Kubla Khan" is also widely considered to be a poem of the opium experience. Coleridge began using opium in 1791 after developing jaundice and rheumatic fever, and became a full addict after a severe attack of the disease in 1801, requiring 80–100 drops of laudanum daily. China Recreational use in China The earliest clear description of the use of opium as a recreational drug in China came from Xu Boling, who wrote in 1483 that opium was "mainly used to aid masculinity, strengthen sperm and regain vigor", and that it "enhances the art of alchemists, sex and court ladies". He also described an expedition sent by the Ming dynasty Chenghua Emperor in 1483 to procure opium for a price "equal to that of gold" in Hainan, Fujian, Zhejiang, Sichuan and Shaanxi, close to the western lands of Xiyu. A century later, Li Shizhen listed standard medical uses of opium in his renowned Compendium of Materia Medica (1578), but also wrote that "lay people use it for the art of sex," in particular the ability to "arrest seminal emission". This association of opium with sex continued in China until the end of the 19th century. Opium smoking began as a privilege of the elite and remained a great luxury into the early 19th century. However, by 1861, Wang Tao wrote that opium was used even by rich peasants, and even a small village without a rice store would have a shop where opium was sold. Recreational use of opium was part of a civilized and mannered ritual, akin to an East Asian tea ceremony, prior to the extensive prohibitions that came later. In places of gathering, often tea shops or a person's home, servings of opium were offered as a form of greeting and politeness. They were often served with tea (in China), using specific and fine utensils and beautifully carved wooden pipes. The wealthier the smoker, the finer and more expensive the materials used in the ceremony. The image of seedy, destitute underground smokers was largely generated by anti-opium narratives, and became a more accurate picture of opium use only following the effects of large-scale opium prohibition in the 1880s. Prohibitions in China Opium prohibition in China began in 1729, yet was followed by nearly two centuries of increasing opium use. A massive destruction of opium by an emissary of the Chinese Daoguang Emperor in an attempt to stop opium smuggling by the British led to the First Opium War (1839–1842), in which Britain defeated China. After 1860, opium use continued to increase with widespread domestic production in China. By 1905, an estimated 25 percent of the male population were regular consumers of the drug. Recreational use of opium elsewhere in the world remained rare until late in the 19th century, as indicated by ambivalent reports of opium usage. In 1906, 41,000 tons were produced, but because 39,000 tons of that year's opium were consumed in China, overall usage in the rest of the world was much lower. These figures from 1906 have been criticized as overestimates. Smoking of opium came on the heels of tobacco smoking and may have been encouraged by a brief ban on the smoking of tobacco by the Ming emperor. 
The prohibition ended in 1644 with the coming of the Qing dynasty, which encouraged smokers to mix in increasing amounts of opium. In 1705, Wang Shizhen wrote, "nowadays, from nobility and gentlemen down to slaves and women, all are addicted to tobacco." Tobacco in that time was frequently mixed with other herbs (this continues with clove cigarettes to the modern day), and opium was one component in the mixture. Tobacco mixed with opium was called madak (or madat) and became popular throughout China and its seafaring trade partners (such as Taiwan, Java, and the Philippines) in the 17th century. In 1712, Engelbert Kaempfer described addiction to madak: "No commodity throughout the Indies is retailed with greater profit by the Batavians than opium, which [its] users cannot do without, nor can they come by it except it be brought by the ships of the Batavians from Bengal and Coromandel." Fueled in part by the 1729 ban on madak, which at first effectively exempted pure opium as a potentially medicinal product, the smoking of pure opium became more popular in the 18th century. In 1736, the smoking of pure opium was described by Huang Shujing, involving a pipe made from bamboo rimmed with silver, stuffed with palm slices and hair, fed by a clay bowl in which a globule of molten opium was held over the flame of an oil lamp. This elaborate procedure, requiring the maintenance of pots of opium at just the right temperature for a globule to be scooped up with a needle-like skewer for smoking, formed the basis of a craft of "paste-scooping" by which servant girls could become prostitutes as the opportunity arose. Chinese diaspora in the West The Chinese Diaspora in the West (1800s to 1949) first began to flourish during the 19th century due to famine and political upheaval, as well as rumors of wealth to be had outside of Southeast Asia. Chinese emigrants to cities such as San Francisco, London, and New York City brought with them the Chinese manner of opium smoking, and the social traditions of the opium den. The Indian Diaspora distributed opium-eaters in the same way, and both social groups survived as "lascars" (seamen) and "coolies" (manual laborers). French sailors provided another major group of opium smokers, having gotten the habit while in French Indochina, where the drug was promoted and monopolized by the colonial government as a source of revenue. Among white Europeans, opium was more frequently consumed as laudanum or in patent medicines. Britain's All-India Opium Act of 1878 formalized ethnic restrictions on the use of opium, limiting recreational opium sales to registered Indian opium-eaters and Chinese opium-smokers only and prohibiting its sale to workers from Burma. Likewise, in San Francisco, Chinese immigrants were permitted to smoke opium, so long as they refrained from doing so in the presence of whites. Because of the low social status of immigrant workers, contemporary writers and media had little trouble portraying opium dens as seats of vice, white slavery, gambling, knife- and revolver-fights, and a source for drugs causing deadly overdoses, with the potential to addict and corrupt the white population. By 1919, anti-Chinese riots attacked Limehouse, the Chinatown of London. Chinese men were deported for playing keno and sentenced to hard labor for opium possession. Due to this, both the immigrant population and the social use of opium fell into decline. Yet despite lurid literary accounts to the contrary, 19th-century London was not a hotbed of opium smoking. 
The total lack of photographic evidence of opium smoking in Britain, as opposed to the relative abundance of historical photos depicting opium smoking in North America and France, indicates the infamous Limehouse opium-smoking scene was little more than fantasy on the part of British writers of the day, who were intent on scandalizing their readers while drumming up the threat of the "yellow peril". Prohibition and conflict in China A large scale opium prohibition attempt began in 1729, when the Qing Yongzheng Emperor, disturbed by madak smoking at court and carrying out the government's role of upholding Confucian virtues, officially prohibited the sale of opium, except for a small amount for medicinal purposes. The ban punished sellers and opium den keepers, but not users of the drug. Opium was banned completely in 1799, and this prohibition continued until 1860. During the Qing dynasty, China opened itself to foreign trade under the Canton System through the port of Guangzhou (Canton), with traders from the East India Company visiting the port by the 1690s. Due to the growing British demand for Chinese tea and the Chinese Emperor's lack of interest in British commodities other than silver, British traders resorted to trade in opium as a high-value commodity for which China was not self-sufficient. The English traders had been purchasing small amounts of opium from India for trade since Ralph Fitch first visited in the mid-16th century. Trade in opium was standardized, with production of balls of raw opium, , 30% water content, wrapped in poppy leaves and petals, and shipped in chests of (one picul). Chests of opium were sold in auctions in Calcutta with the understanding that the independent purchasers would then smuggle it into China. China had a positive balance sheet in trading with the British, which led to a decrease of the British silver stocks. Therefore, the British tried to encourage Chinese opium use to enhance their balance, and they delivered it from Indian provinces under British control. In India, its cultivation, as well as the manufacture and traffic to China, were subject to the British East India Company (BEIC), as a strict monopoly of the British government. There was an extensive and complicated system of BEIC agencies involved in the supervision and management of opium production and distribution in India. Bengal opium was highly prized, commanding twice the price of the domestic Chinese product, which was regarded as inferior in quality. Some competition came from the newly independent United States, which began to compete in Guangzhou, selling Turkish opium in the 1820s. Portuguese traders also brought opium from the independent Malwa states of western India, although by 1820, the British were able to restrict this trade by charging "pass duty" on the opium when it was forced to pass through Bombay to reach an entrepot. Despite drastic penalties and continued prohibition of opium until 1860, opium smuggling rose steadily from 200 chests per year under the Yongzheng Emperor to 1,000 under the Qianlong Emperor, 4,000 under the Jiaqing Emperor, and 30,000 under the Daoguang Emperor. This illegal sale of opium, which has been called "the most long continued and systematic international crime of modern times", became one of the world's most valuable single commodity trades, and between 1814 and 1850, sucked out 11 percent of China's money supply. 
In response to the ever-growing number of Chinese people becoming addicted to opium, the Qing Daoguang Emperor took strong action to halt the smuggling of opium, including the seizure of cargo. In 1838, the Chinese Commissioner Lin Zexu destroyed 20,000 chests of opium (approximately 2,660,000 pounds) in Guangzhou in a river. Given that a chest of opium was worth nearly in 1800, this was a substantial economic loss. The British queen Victoria, not willing to replace the cheap opium with costly silver, began the First Opium War in 1840, the British winning Hong Kong and trade concessions in the first of a series of Unequal Treaties. The opium trade incurred intense enmity from the later British Prime Minister William Ewart Gladstone. As a member of Parliament, Gladstone called it "most infamous and atrocious", referring in particular to the opium trade between China and British India. Gladstone was fiercely against both of the Opium Wars Britain waged in China (the First Opium War, initiated in 1840, and the Second Opium War, initiated in 1857), denounced British violence against the Chinese, and was ardently opposed to the British trade in opium to China. Gladstone lambasted it as "Palmerston's Opium War" and said in May 1840 that he felt "in dread of the judgments of God upon England for our national iniquity towards China". Gladstone made a famous speech in Parliament against the First Opium War, criticizing it as "a war more unjust in its origin, a war more calculated in its progress to cover this country with permanent disgrace". His hostility to opium stemmed from the effects the drug had had on his sister Helen. Because of the First Opium War, brought on by Palmerston, Gladstone was initially reluctant to join the government of Peel before 1841. Following China's defeat in the Second Opium War in 1858, China was forced to legalize opium and began massive domestic production. Importation of opium peaked in 1879 at 6,700 tons, and by 1906, China was producing 85 percent of the world's opium, some 35,000 tons, and 27 percent of its adult male population regularly used opium: 13.5 million people consuming 39,000 tons of opium yearly. From 1880 to the beginning of the Communist era, the British attempted to discourage the use of opium in China, but this effectively promoted the use of morphine, heroin, and cocaine, further exacerbating the problem of addiction. Scientific evidence of the pernicious nature of opium use was largely undocumented in the 1890s, when Protestant missionaries in China decided to strengthen their opposition to the trade by compiling data which would demonstrate the harm the drug did. Faced with the problem that many Chinese associated Christianity with opium, partly due to the arrival of early Protestant missionaries on opium clippers, at the 1890 Shanghai Missionary Conference they agreed to establish the Permanent Committee for the Promotion of Anti-Opium Societies in an attempt to overcome this problem and to arouse public opinion against the opium trade. The members of the committee were John Glasgow Kerr, MD, American Presbyterian Mission in Guangzhou (Canton); B.C. Atterbury, MD, American Presbyterian Mission in Beijing (Peking); Archdeacon Arthur E. Moule, Church Missionary Society in Shanghai; Henry Whitney, MD, American Board of Commissioners for Foreign Missions in Fuzhou; the Rev. Samuel Clarke, China Inland Mission in Guiyang; the Rev. Arthur Gostick Shorrock, English Baptist Mission in Taiyuan; and the Rev. 
Griffith John, London Mission Society in Hankou. These missionaries were generally outraged over the British government's Royal Commission on Opium visiting India but not China. Accordingly, the missionaries first organized the Anti-Opium League in China among their colleagues in every mission station in China. American missionary Hampden Coit DuBose acted as first president. This organization, which had elected national officers and held an annual national meeting, was instrumental in gathering data from every Western-trained medical doctor in China, which was then published as William Hector Park compiled Opinions of Over 100 Physicians on the Use of Opium in China (Shanghai: American Presbyterian Mission Press, 1899). The vast majority of these medical doctors were missionaries; the survey also included doctors who were in private practices, particularly in Shanghai and Hong Kong, as well as Chinese who had been trained in medical schools in Western countries. In England, the home director of the China Inland Mission, Benjamin Broomhall, was an active opponent of the opium trade, writing two books to promote the banning of opium smoking: The Truth about Opium Smoking and The Chinese Opium Smoker. In 1888, Broomhall formed and became secretary of the Christian Union for the Severance of the British Empire with the Opium Traffic and editor of its periodical, National Righteousness. He lobbied the British Parliament to stop the opium trade. He and James Laidlaw Maxwell appealed to the London Missionary Conference of 1888 and the Edinburgh Missionary Conference of 1910 to condemn the continuation of the trade. When Broomhall was dying, his son Marshall read to him from The Times the welcome news that an agreement had been signed ensuring the end of the opium trade within two years. Official Chinese resistance to opium was renewed on September 20, 1906, with an antiopium initiative intended to eliminate the drug problem within 10 years. The program relied on the turning of public sentiment against opium, with mass meetings at which opium paraphernalia were publicly burned, as well as coercive legal action and the granting of police powers to organizations such as the Fujian Anti-Opium Society. Smokers were required to register for licenses for gradually reducing rations of the drug. Action against opium farmers centered upon a highly repressive incarnation of law enforcement in which rural populations had their property destroyed, their land confiscated and/or were publicly tortured, humiliated and executed. Addicts sometimes turned to missionaries for treatment for their addiction, though many associated these foreigners with the drug trade. The program was counted as a substantial success, with a cessation of direct British opium exports to China (but not Hong Kong) and most provinces declared free of opium production. Nonetheless, the success of the program was only temporary, with opium use rapidly increasing during the disorder following the death of Yuan Shikai in 1916. Opium farming also increased, peaking in 1930 when the League of Nations singled China out as the primary source of illicit opium in East and Southeast Asia. Many local powerholders facilitated the trade during this period to finance conflicts over territory and political campaigns. In some areas food crops were eradicated to make way for opium, contributing to famines in Guizhou and Shaanxi Provinces between 1921 and 1923, and food deficits in other provinces. 
Beginning in 1915, Chinese nationalist groups came to describe the period of military losses and Unequal Treaties as the "Century of National Humiliation", later defined to end with the conclusion of the Chinese Civil War in 1949. In the northern provinces of Ningxia and Suiyuan in China, the Chinese Muslim General Ma Fuxiang both prohibited and engaged in the opium trade. It was hoped that Ma Fuxiang would improve the situation, since Chinese Muslims were well known for their opposition to smoking opium. Ma Fuxiang officially prohibited opium and made it illegal in Ningxia, but the Guominjun reversed his policy; by 1933, people from every level of society were abusing the drug, and Ningxia was left in destitution. In 1923, an officer of the Bank of China from Baotou found out that Ma Fuxiang was assisting the opium trade, which helped finance his military expenses. He earned from taxing those sales in 1923. General Ma had been using the bank, a branch of the Government of China's exchequer, to arrange for silver currency to be transported to Baotou, where it was used to sponsor the trade. The opium trade under the Chinese Communist Party was important to its finances in the 1940s. Peter Vladimirov's diary provided a first-hand account. Chen Yung-fa provided a detailed historical account of how the opium trade was essential to the economy of Yan'an during this period. Mitsubishi and Mitsui were involved in the opium trade during the Japanese occupation of China. The Mao Zedong government is generally credited with eradicating both consumption and production of opium during the 1950s, using unrestrained repression and social reform. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region. The remnant opium trade primarily served Southeast Asia, but spread to American soldiers during the Vietnam War; in a study of opiate use in soldiers returning to the United States in 1971, 20 percent of participants were dependent enough to experience withdrawal symptoms. Prohibition outside China There were no legal restrictions on the importation or use of opium in the United States until the San Francisco Opium Den Ordinance, which banned dens for public smoking of opium in 1875, a measure fueled by anti-Chinese sentiment and the perception that whites were starting to frequent the dens. This was followed by an 1891 California law requiring that narcotics carry warning labels and that their sales be recorded in a registry; amendments to the California Pharmacy and Poison Act in 1907 made it a crime to sell opiates without a prescription, and bans on possession of opium or opium pipes were enacted in 1909. At the US federal level, the legal actions taken reflected constitutional restrictions under the enumerated powers doctrine prior to reinterpretation of the commerce clause, which did not allow the federal government to enact arbitrary prohibitions, but did permit arbitrary taxation. Beginning in 1883, opium importation was taxed at to per pound, until the Opium Exclusion Act of 1909 prohibited the importation of opium altogether. In a similar manner, the Harrison Narcotics Tax Act of 1914, passed in fulfillment of the International Opium Convention of 1912, nominally placed a tax on the distribution of opiates, but served as a de facto prohibition of the drugs. 
Today, opium is regulated by the Drug Enforcement Administration under the Controlled Substances Act. Following passage of a Colonial Australian law in 1895, Queensland's Aboriginals Protection and Restriction of the Sale of Opium Act 1897 addressed opium addiction among Aboriginal people, though it soon became a general vehicle for depriving them of basic rights by administrative regulation. By 1905, all Australian states and territories had passed similar laws prohibiting the sale of opium. Smoking and possession were prohibited in 1908. Hardening of Canadian attitudes toward Chinese opium users and fear of a spread of the drug into the white population led to the effective criminalization of opium for nonmedical use in Canada between 1908 and the mid-1920s. In 1909, the International Opium Commission was founded, and by 1914, 34 nations had agreed that the production and importation of opium should be diminished. In 1924, 62 nations participated in a meeting of the commission. Subsequently, this role passed to the League of Nations, and all signatory nations agreed to prohibit the import, sale, distribution, export, and use of all narcotic drugs, except for medical and scientific purposes. This role was later taken up by the International Narcotics Control Board of the United Nations under Article 23 of the Single Convention on Narcotic Drugs, and subsequently under the Convention on Psychotropic Substances. Opium-producing nations are required to designate a government agency to take physical possession of licit opium crops as soon as possible after harvest and conduct all wholesaling and exporting through that agency. Indochina tax From 1897 to 1902, Paul Doumer (later President of France) was Governor-General of French Indochina. Upon his arrival, the colonies were losing millions of francs each year. Determined to put them on a paying basis, he levied taxes on various products, opium among them. The Vietnamese, Cambodians, and Laotians who could not or would not pay these taxes lost their houses and land, and often became day laborers. Evidently, resorting to this means of gaining income gave France a vested interest in the continuation of opium use among the population of Indochina. Regulation in Britain and the United States Before the 1920s, regulation in Britain was controlled by pharmacists. Pharmacists who were found to have prescribed opium for illegitimate uses, and anyone found to have sold opium without proper qualifications, would be prosecuted. With the passing of the Rolleston Act in Britain in 1926, doctors were allowed to prescribe opiates such as morphine and heroin if they believed their patients demonstrated a medical need. Because addiction was viewed as a medical problem rather than an indulgence, doctors were permitted to allow patients to wean themselves off opiates rather than cutting off any opiate use altogether. The passing of the Rolleston Act put the control of opium use in the hands of medical doctors instead of pharmacists. Later in the 20th century, addiction to opiates, especially heroin in young people, continued to rise, and so the sale and prescription of opiates were limited to doctors in treatment centres. If these doctors were found to be prescribing opiates without just cause, they could lose their licence to practice or prescribe drugs. Abuse of opium in the United States began in the late 19th century and was largely associated with Chinese immigrants. 
During this time, the use of opium had little stigma; the drug was used freely until 1882, when a law was passed to confine opium smoking to specific dens. Until the full ban on opium-based products came into effect just after the beginning of the twentieth century, physicians in the US considered opium a miracle drug that could help with many ailments. The ban on such products was therefore more a result of negative connotations surrounding their use and distribution by Chinese immigrants, who were heavily persecuted during this particular period in history. In the early 20th century, however, Dr. Hamilton Wright worked to decrease the use of opium in the US by submitting the Harrison Act to Congress. This act put taxes and restrictions on the sale and prescription of opium, and sought to stigmatize the opium poppy and its derivatives as "demon drugs" in order to scare people away from them. This act, and the stigma it attached to opium as a demon drug, led to the criminalization of people who used opium-based products. It made the use and possession of opium and any of its derivatives illegal. The restrictions were later redefined by the Federal Controlled Substances Act of 1970. 20th-century use Opium production in China and the rest of East Asia was nearly wiped out after WWII; however, sustained covert support by the United States Central Intelligence Agency for the Thai Northern Army and the Chinese Nationalist Kuomintang army invading Burma facilitated production and trafficking of the drug from Southeast Asia for decades, with the region becoming a major source of world supplies. During the Communist era in Eastern Europe, poppy stalks sold in bundles by farmers were processed by users with household chemicals to make kompot ("Polish heroin"), and poppy seeds were used to produce koknar, an opiate. Obsolescence Globally, opium has gradually been superseded by a variety of purified, semi-synthetic, and synthetic opioids with progressively stronger effects, and by other general anesthetics. This process began in 1804, when Friedrich Wilhelm Adam Sertürner first isolated morphine from the opium poppy. The process continued until 1817, when Sertürner published his results after thirteen years of research and a nearly disastrous trial on himself and three boys. The great advantage of purified morphine was that a patient could be treated with a known dose, whereas with raw plant material, as Gabriel Fallopius once lamented, "if soporifics are weak they do not help; if they are strong they are exceedingly dangerous." Morphine was the first pharmaceutical isolated from a natural product, and this success encouraged the isolation of other alkaloids: by 1820, isolations of noscapine, strychnine, veratrine, colchicine, caffeine, and quinine were reported. Morphine sales began in 1827, by Heinrich Emanuel Merck of Darmstadt, and helped him expand his family pharmacy into the Merck KGaA pharmaceutical company. Codeine was isolated in 1832 by Pierre Jean Robiquet. The use of diethyl ether and chloroform for general anesthesia began in 1846–1847, and rapidly displaced the use of opiates and tropane alkaloids from Solanaceae due to their relative safety. Heroin, the first semi-synthetic opioid, was first synthesized in 1874, but was not pursued until its rediscovery in 1897 by Felix Hoffmann at the Bayer pharmaceutical company in Elberfeld, Germany. From 1898 to 1910, heroin was marketed as a non-addictive morphine substitute and cough medicine for children. 
Because the lethal dose of heroin was viewed as a hundred times greater than its effective dose, heroin was advertised as a safer alternative to other opioids. By 1902, sales made up 5 percent of the company's profits, and "heroinism" had attracted media attention. Oxycodone, a thebaine derivative similar to codeine, was introduced by Bayer in 1916 and promoted as a less-addictive analgesic. Preparations of the drug such as oxycodone with paracetamol and extended release oxycodone remain popular to this day. A range of synthetic opioids such as methadone (1937), pethidine (1939), fentanyl (late 1950s), and derivatives thereof have been introduced, and each is preferred for certain specialized applications. Nonetheless, morphine remains the drug of choice for American combat medics, who carry packs of syrettes containing 16 milligrams each for use on severely wounded soldiers. No drug has been found that can match the painkilling effect of opioids without also duplicating much of their addictive potential. Modern production and use Opium was prohibited in many countries during the early 20th century, leading to the modern pattern of opium production as a precursor for illegal recreational drugs or tightly regulated, highly taxed, legal prescription drugs. In 1980, 2,000 tons of opium supplied all legal and illegal uses. Worldwide production in 2006 was 6610 tonnes—about one-fifth the level of production in 1906; since then, opium production has fallen. In 2002, the price for one kilogram of opium was for the farmer, for purchasers in Afghanistan, and on the streets of Europe before conversion into heroin. Opium production increased considerably, surpassing 5,000 tons in 2002 and reaching 8,600 tons in Afghanistan and 840 tons in the Golden Triangle in 2014. The World Health Organization has estimated that current production of opium would need to increase fivefold to account for total global medical need. Solar energy panels in use in Afghanistan have allowed farmers to dig their wells deeper, leading to a bumper crop of opium year after year. In a 2023 report, poppy cultivation in southern Afghanistan was reduced by over 80% as a result of Taliban campaigns to stop its use toward opium. This included a 99% reduction of opium growth in the Helmand Province. In November 2023, a U.N report showed that in the entirety of Afghanistan, poppy cultivation dropped by over 95%, removing it from its place as being the world's largest opium producer. Papaver somniferum Opium poppies are popular and attractive garden plants, whose flowers vary greatly in color, size and form. A modest amount of domestic cultivation in private gardens is not usually subject to legal controls. In part, this tolerance reflects variation in addictive potency. A cultivar for opium production, Papaver somniferum L. elite, contains 91.2 percent morphine, codeine, and thebaine in its latex alkaloids, whereas in the latex of the condiment cultivar "Marianne", these three alkaloids total only 14.0 percent. The remaining alkaloids in the latter cultivar are primarily narcotoline and noscapine. Seed capsules can be dried and used for decorations, but they also contain morphine, codeine, and other alkaloids. These pods can be boiled in water to produce a bitter tea that induces a long-lasting intoxication. If allowed to mature, poppy pods (poppy straw) can be crushed and used to produce lower quantities of morphinans. 
In poppies subjected to mutagenesis and selection on a mass scale, researchers have been able to use poppy straw to obtain large quantities of oripavine, a precursor to opioids and antagonists such as naltrexone. Although millennia older, the production of poppy head decoctions can be seen as a quick-and-dirty variant of the Kábáy poppy straw process, which since its publication in 1930 has become the major method of obtaining licit opium alkaloids worldwide, as discussed in Morphine. Poppy seeds are a common and flavorsome topping for breads and cakes. One gram of poppy seeds contains up to 33 micrograms of morphine and 14 micrograms of codeine, and the Substance Abuse and Mental Health Services Administration in the United States formerly mandated that all drug screening laboratories use a standard cutoff of 300 nanograms per milliliter in urine samples. A single poppy seed roll (0.76 grams of seeds) usually did not produce a positive drug test, but a positive result was observed from eating two rolls. A slice of poppy seed cake containing nearly five grams of seeds per slice produced positive results for 24 hours. Such results are viewed as false positive indications of drug use and were the basis of a legal defense. On November 30, 1998, the standard cutoff was increased to 2000 nanograms (two micrograms) per milliliter. Confirmation by gas chromatography-mass spectrometry will distinguish amongst opium and variants including poppy seeds, heroin, and morphine and codeine pharmaceuticals by measuring the morphine:codeine ratio and looking for the presence of noscapine and acetylcodeine, the latter of which is only found in illicitly produced heroin, and heroin metabolites such as 6-monoacetylmorphine. Harvesting and processing When grown for opium production, the skin of the ripening pods of these poppies is scored by a sharp blade at a time carefully chosen so that rain, wind, and dew cannot spoil the exudation of white, milky latex, usually in the afternoon. Incisions are made while the pods are still raw, with no more than a slight yellow tint, and must be shallow to avoid penetrating hollow inner chambers or loculi while cutting into the lactiferous vessels. In the Indian Subcontinent, Afghanistan, Central Asia and Iran, the special tool used to make the incisions is called a nushtar or "nishtar" (from Persian, meaning a lancet) and carries three or four blades three millimeters apart, which are scored upward along the pod. Incisions are made three or four times at intervals of two to three days, and each time the "poppy tears", which dry to a sticky brown resin, are collected the following morning. One acre harvested in this way can produce three to five kilograms of raw opium. In the Soviet Union, pods were typically scored horizontally, and opium was collected three times, or else one or two collections were followed by isolation of opiates from the ripe capsules. Oil poppies, an alternative strain of P. somniferum, were also used for production of opiates from their capsules and stems. A traditional Chinese method of harvesting opium latex involved cutting off the heads and piercing them with a coarse needle then collecting the dried opium 24 to 48 hours later. Raw opium may be sold to a merchant or broker on the black market, but it usually does not travel far from the field before it is refined into morphine base, because pungent, jelly-like raw opium is bulkier and harder to smuggle. 
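Returning to the poppy-seed screening figures quoted above (roughly 33 micrograms of morphine per gram of seeds, and urine cutoffs of 300 and later 2000 nanograms per millilitre), they lend themselves to a quick sanity check. The sketch below is illustrative arithmetic only: it ignores pharmacokinetics entirely (partial absorption, metabolism, excretion over time, seed-to-seed variation) and simply asks how small a urine volume the whole ingested dose would have to be concentrated into before it reached each cutoff. The constant and function names are invented for illustration, not taken from any testing standard.

```python
# Illustrative arithmetic only, using the per-gram seed morphine figure and the
# urine screening cutoffs quoted above. It asks: into how small a urine volume
# would the whole ingested morphine dose have to be concentrated to reach each
# cutoff? Real pharmacokinetics (partial absorption, metabolism, excretion over
# time, seed-to-seed variation) are deliberately ignored.

MORPHINE_UG_PER_G_SEEDS = 33.0                              # upper-bound figure cited above
CUTOFFS_NG_PER_ML = {"pre-1998": 300.0, "current": 2000.0}  # screening thresholds cited above

def ingested_morphine_ng(seed_grams: float) -> float:
    """Morphine content (ng) of a given mass of poppy seeds."""
    return seed_grams * MORPHINE_UG_PER_G_SEEDS * 1_000.0   # micrograms -> nanograms

items = [("poppy seed roll (0.76 g of seeds)", 0.76),
         ("poppy seed cake slice (~5 g of seeds)", 5.0)]

for name, grams in items:
    dose_ng = ingested_morphine_ng(grams)
    print(f"{name}: ~{dose_ng / 1000:.0f} ug of morphine")
    for label, cutoff in CUTOFFS_NG_PER_ML.items():
        volume_ml = dose_ng / cutoff
        print(f"  reaches the {label} cutoff only if concentrated into <= {volume_ml:.0f} mL of urine")
```

On these crude numbers, a cake-slice dose reaches the old 300 ng/mL cutoff once concentrated into roughly half a litre of urine, while the 2000 ng/mL cutoff would require implausibly small volumes, which is consistent with the rationale for raising the threshold.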
Crude laboratories in the field are capable of refining opium into morphine base by a simple acid-base extraction. A sticky, brown paste, morphine base is pressed into bricks and sun-dried, and can either be smoked, prepared into other forms or processed into heroin. Other methods of preparation (besides smoking) include processing into regular opium tincture (tinctura opii), laudanum, paregoric (tinctura opii camphorata), herbal wine (e.g., vinum opii), opium powder (pulvis opii), opium syrup (sirupus opii) and opium extract (extractum opii). Vinum opii is made by combining sugar, white wine, cinnamon, and cloves. Opium syrup is made by combining 97.5 parts sugar syrup with 2.5 parts opium extract. Opium extract (extractum opii), finally, can be made by macerating raw opium with water. To make opium extract, 20 parts water are combined with 1 part raw opium which has been boiled for 5 minutes (the latter to ease mixing). Heroin is widely preferred because of increased potency. One study in postaddicts found heroin to be approximately 2.2 times more potent than morphine by weight with a similar duration; at these relative quantities, they could distinguish the drugs subjectively but had no preference. Heroin was also found to be twice as potent as morphine in surgical anesthesia. Morphine is converted into heroin by a simple chemical reaction with acetic anhydride, followed by purification. Especially in Mexican production, opium may be converted directly to "black tar heroin" in a simplified procedure. This form predominates in the U.S. west of the Mississippi. Relative to other preparations of heroin, it has been associated with a dramatically decreased rate of HIV transmission among intravenous drug users (4 percent in Los Angeles vs. 40 percent in New York) due to technical requirements of injection, although it is also associated with greater risk of venous sclerosis and necrotizing fasciitis.

Illegal production

Afghanistan was formerly the primary producer of the drug. Having regularly produced 70 percent of the world's opium, Afghanistan decreased production to 74 tons per year under a ban by the Taliban in 2000, a move which cut production by 94 percent. A year later, after American and British troops invaded Afghanistan, removed the Taliban and installed the interim government, the land under cultivation leapt back to , with Afghanistan supplanting Burma to become the world's largest opium producer once more. Opium production increased rapidly in Afghanistan from that point, reaching an all-time high in 2006. According to DEA statistics, Afghanistan's production of oven-dried opium increased to 1,278 tons in 2002, more than doubled by 2003, and nearly doubled again during 2004. In late 2004, the U.S. government estimated that 206,000 hectares were under poppy cultivation, 4.5 percent of the country's total cropland, and produced 4,200 metric tons of opium, 76 percent of the world's supply, yielding 60 percent of Afghanistan's gross domestic product. In 2006, the UN Office on Drugs and Crime estimated production to have risen 59 percent to in cultivation, yielding 6,100 tons of opium, 82 percent of the world's supply. The value of the resulting heroin was estimated at , of which Afghan farmers were estimated to have received in revenue. For farmers, the crop can be up to ten times more profitable than wheat. The price of opium is around per kilo. Opium production has led to rising tensions in Afghan villages.
Though direct conflict has yet to occur, the opinions of the new class of young rich men involved in the opium trade are at odds with those of the traditional village leaders. An increasingly large fraction of opium is processed into morphine base and heroin in drug labs in Afghanistan. Despite an international set of chemical controls designed to restrict availability of acetic anhydride, it enters the country, perhaps through its Central Asian neighbors which do not participate. A counternarcotics law passed in December 2005 requires Afghanistan to develop registries or regulations for tracking, storing, and owning acetic anhydride. In November 2023, a U.N. report showed that in the entirety of Afghanistan, poppy cultivation dropped by over 95%, removing it from its place as the world's largest opium producer. Besides Afghanistan, smaller quantities of opium are produced in Pakistan, the Golden Triangle region of Southeast Asia (particularly Burma), Colombia, Guatemala, and Mexico. Chinese producers mainly trade with and profit from North America; in 2002, they were seeking to expand into the eastern United States. In the post-9/11 era, cross-border trading became more difficult, and because new international laws were put into place, the opium trade became more diffuse. Power shifted from remote to high-end smugglers and opium traders. Outsourcing became a huge factor for survival for many smugglers and opium farmers. In 2023 Burma overtook Afghanistan and became the world's largest producer of opium, producing 1,080 metric tonnes according to the UN Southeast Asia Opium Survey report.

Legal production

Legal opium production is allowed under the United Nations Single Convention on Narcotic Drugs and other international drug treaties, subject to strict supervision by the law enforcement agencies of individual countries. The leading legal production method is the Robertson-Gregory process, whereby the entire poppy, excluding roots and leaves, is mashed and stewed in dilute acid solutions. The alkaloids are then recovered via acid-base extraction and purified. The exact date of its discovery is unknown, but it was described by Wurtz in his Dictionnaire de chimie pure et appliquée published in 1868. Legal opium production in India is much more traditional. As of 2008, opium was collected by farmers who were licensed to grow of opium poppies, and who, to maintain their licences, needed to sell 56 kilograms of unadulterated raw opium paste. The price of opium paste is fixed by the government according to the quality and quantity tendered. The average is around 1500 rupees () per kilogram. Some additional money is made by drying the poppy heads and collecting poppy seeds, and a small fraction of opium beyond the quota may be consumed locally or diverted to the black market. The opium paste is dried and processed in government opium and alkaloid factories before it is packed into cases of 60 kilograms for export. Purification of chemical constituents is done in India for domestic production, but typically done abroad by foreign importers. Legal opium importation from India and Turkey is conducted by Mallinckrodt, Noramco, Abbott Laboratories, Purdue Pharma, and Cody Laboratories Inc. in the United States, and legal opium production is conducted by GlaxoSmithKline, Johnson & Johnson, Johnson Matthey, and Mayne in Tasmania, Australia; Sanofi Aventis in France; Shionogi Pharmaceutical in Japan; and MacFarlan Smith in the United Kingdom.
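As a minor illustration of the Indian licensing figures above (the 56-kilogram minimum sale of raw paste, the roughly 1500-rupee average price per kilogram, and the 60-kilogram export cases), the back-of-the-envelope sketch below simply multiplies them out. The three numbers come from the text; the function names and the example tonnage are invented for illustration.

```python
# A small back-of-the-envelope sketch of the Indian licensing figures quoted
# above: the 56 kg minimum sale of raw opium paste needed to keep a licence,
# the ~1500 rupee/kg government-fixed average price, and the 60 kg export
# cases. The numbers are the article's; everything else is illustrative.

LICENCE_QUOTA_KG = 56           # minimum paste a licensed farmer must tender
AVG_PRICE_RUPEES_PER_KG = 1500  # average government-fixed price cited above
EXPORT_CASE_KG = 60             # packing size for export

def quota_revenue_rupees(quota_kg: float = LICENCE_QUOTA_KG,
                         price: float = AVG_PRICE_RUPEES_PER_KG) -> float:
    """Revenue from selling exactly the licence quota at the average price."""
    return quota_kg * price

def export_cases(total_paste_kg: float) -> int:
    """Whole 60 kg export cases filled by a given amount of processed paste."""
    return int(total_paste_kg // EXPORT_CASE_KG)

print(quota_revenue_rupees())   # 84000 rupees for the bare quota
print(export_cases(1000))       # 16 full cases from a hypothetical 1000 kg of paste
```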
The UN treaty requires that every country submit annual reports to the International Narcotics Control Board, stating that year's actual consumption of many classes of controlled drugs as well as opioids and projecting required quantities for the next year. This is to allow trends in consumption to be monitored and production quotas allotted. In 2005, the European Senlis Council began developing a programme which hopes to solve the problems caused by the large quantity of opium produced illegally in Afghanistan, most of which is converted to heroin and smuggled for sale in Europe and the United States. This proposal is to license Afghan farmers to produce opium for the world pharmaceutical market, and thereby solve another problem, that of chronic underuse of potent analgesics where required within developing nations. Part of the proposal is to overcome the "80–20 rule" that requires the U.S. to purchase 80 percent of its legal opium from India and Turkey to include Afghanistan, by establishing a second-tier system of supply control that complements the current INCB regulated supply and demand system by providing poppy-based medicines to countries who cannot meet their demand under the current regulations. Senlis arranged a conference in Kabul that brought drug policy experts from around the world to meet with Afghan government officials to discuss internal security, corruption issues, and legal issues within Afghanistan. In June 2007, the council launched a "Poppy for Medicines" project that provides a technical blueprint for the implementation of an integrated control system within Afghan village-based poppy for medicine projects: the idea promotes the economic diversification by redirecting proceeds from the legal cultivation of poppy and production of poppy-based medicines. There has been criticism of the Senlis report findings by Macfarlan Smith, who argue that though they produce morphine in Europe, they were never asked to contribute to the report. Cultivation in the UK In late 2006, the British government permitted the pharmaceutical company MacFarlan Smith (a Johnson Matthey company) to cultivate opium poppies in England for medicinal reasons, after Macfarlan Smith's primary source, India, decided to increase the price of export opium latex. This move is well received by British farmers, with a major opium poppy field located in Didcot, England. The British government has contradicted the Home Office's suggestion that opium cultivation can be legalized in Afghanistan for exports to the United Kingdom, helping lower poverty and internal fighting while helping the NHS to meet the high demand for morphine and heroin. Opium poppy cultivation in the United Kingdom does not need a licence, but a licence is required for those wishing to extract opium for medicinal products. Consumption In the industrialized world, the United States is the world's biggest consumer of prescription opioids, with Italy being one of the lowest, because of tighter regulations on prescribing narcotics for pain relief. Most opium imported into the United States is broken down into its alkaloid constituents, and whether legal or illegal, most current drug use occurs with processed derivatives such as heroin rather than with unrefined opium. 
Intravenous injection of opiates is most used: by comparison with injection, "dragon chasing" (heating of heroin on a piece of foil), and madak and "ack ack" (smoking of cigarettes containing tobacco mixed with heroin powder) are only 40 percent and 20 percent efficient, respectively. One study of British heroin addicts found a 12-fold excess mortality ratio (1.8 percent of the group dying per year). Most heroin deaths result not from overdose per se, but combination with other depressant drugs such as alcohol or benzodiazepines. The smoking of opium does not involve the burning of the material as might be imagined. Rather, the prepared opium is indirectly heated to temperatures at which the active alkaloids, chiefly morphine, are vaporized. In the past, smokers would use a specially designed opium pipe which had a removable knob-like pipe-bowl of fired earthenware attached by a metal fitting to a long, cylindrical stem. A small "pill" of opium about the size of a pea would be placed on the pipe-bowl, which was then heated by holding it over an opium lamp, a special oil lamp with a distinct funnel-like chimney to channel heat into a small area. The smoker would lie on his or her side in order to guide the pipe-bowl and the tiny pill of opium over the stream of heat rising from the chimney of the oil lamp and inhale the vaporized opium fumes as needed. Several pills of opium were smoked at a single session depending on the smoker's tolerance to the drug. The effects could last up to twelve hours. In Eastern culture, opium is more commonly used in the form of paregoric to treat diarrhea. This is a weaker solution than laudanum, an alcoholic tincture which was prevalently used as a pain medication and sleeping aid. Tincture of opium has been prescribed for, among other things, severe diarrhea. Taken thirty minutes prior to meals, it significantly slows intestinal motility, giving the intestines greater time to absorb fluid in the stool. Despite the historically negative view of opium as a cause of addiction, the use of morphine and other derivatives isolated from opium in the treatment of chronic pain has been reestablished. If given in controlled doses, modern opiates can be an effective treatment for neuropathic pain and other forms of chronic pain. Chemical and physiological properties Opium contains two main groups of alkaloids. Phenanthrenes such as morphine, codeine, and thebaine are the main psychoactive constituents. Isoquinolines such as papaverine and noscapine have no significant central nervous system effects. Morphine is the most prevalent and important alkaloid in opium, consisting of 10–16 percent of the total, and is responsible for most of its harmful effects such as lung edema, respiratory difficulties, coma, or cardiac or respiratory collapse. Morphine binds to and activates mu opioid receptors in the brain, spinal cord, stomach and intestine. Regular use can lead to drug tolerance or physical dependence. Chronic opium addicts in 1906 China consumed an average of eight grams of opium daily; opium addicts in modern Iran are thought to consume about the same. Both analgesia and drug addiction are functions of the mu opioid receptor, the class of opioid receptor first identified as responsive to morphine. Tolerance is associated with the superactivation of the receptor, which may be affected by the degree of endocytosis caused by the opioid administered, and leads to a superactivation of cyclic AMP signaling. 
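Two figures quoted above, the 10 to 16 percent morphine content of opium and the roughly eight grams of opium consumed daily by chronic addicts in 1906 China, combine into a simple range calculation. The sketch below computes only the morphine content of that daily amount; it is not an absorbed dose, since smoking and oral use deliver only a fraction of the alkaloid, and the constant and function names are illustrative.

```python
# Illustrative arithmetic linking two figures quoted above: opium is roughly
# 10-16 percent morphine by weight, and chronic addicts in 1906 China consumed
# about eight grams of opium per day. The result is the morphine *content* of
# that daily amount, not an absorbed dose.

MORPHINE_FRACTION_RANGE = (0.10, 0.16)   # morphine share of opium, per the text

def morphine_content_grams(opium_grams: float) -> tuple[float, float]:
    """Low/high morphine content (g) in a given mass of raw opium."""
    low_frac, high_frac = MORPHINE_FRACTION_RANGE
    return opium_grams * low_frac, opium_grams * high_frac

low, high = morphine_content_grams(8.0)   # the historical 8 g/day figure
print(f"8 g of opium contains roughly {low:.1f}-{high:.2f} g of morphine")
```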
Long-term use of morphine in palliative care and the management of chronic pain always entails a risk that the patient develops tolerance or physical dependence. There are many kinds of rehabilitation treatment, including pharmacologically based treatments with naltrexone, methadone, or ibogaine. In 2021, the International Agency for Research on Cancer concluded that opium is a Group 1 (sufficient evidence) human carcinogen, causing cancers of the larynx, lung, and urinary bladder. Slang terms Some slang terms for opium include: "Big O", "Shanghai Sally", "dope", "hop", "midnight oil", "O.P.", and "tar". "Dope" and "tar" can also refer to heroin. The traditional opium pipe is known as a "dream stick." The term dope entered the English language in the early nineteenth century, originally referring to viscous liquids, particularly sauces or gravy.  It has been used to refer to opiates since at least 1888, and this usage arose because opium, when prepared for smoking, is viscous.
Biology and health sciences
Drugs and pharmacology
null
22718
https://en.wikipedia.org/wiki/Ozone
Ozone
Ozone () (or trioxygen) is an inorganic molecule with the chemical formula . It is a pale blue gas with a distinctively pungent smell. It is an allotrope of oxygen that is much less stable than the diatomic allotrope , breaking down in the lower atmosphere to (dioxygen). Ozone is formed from dioxygen by the action of ultraviolet (UV) light and electrical discharges within the Earth's atmosphere. It is present in very low concentrations throughout the atmosphere, with its highest concentration high in the ozone layer of the stratosphere, which absorbs most of the Sun's ultraviolet (UV) radiation. Ozone's odor is reminiscent of chlorine, and detectable by many people at concentrations of as little as in air. Ozone's O3 structure was determined in 1865. The molecule was later proven to have a bent structure and to be weakly diamagnetic. In standard conditions, ozone is a pale blue gas that condenses at cryogenic temperatures to a dark blue liquid and finally a violet-black solid. Ozone's instability with regard to more common dioxygen is such that both concentrated gas and liquid ozone may decompose explosively at elevated temperatures, physical shock, or fast warming to the boiling point. It is therefore used commercially only in low concentrations. Ozone is a powerful oxidant (far more so than dioxygen) and has many industrial and consumer applications related to oxidation. This same high oxidizing potential, however, causes ozone to damage mucous and respiratory tissues in animals, and also tissues in plants, above concentrations of about . While this makes ozone a potent respiratory hazard and pollutant near ground level, a higher concentration in the ozone layer (from two to eight ppm) is beneficial, preventing damaging UV light from reaching the Earth's surface. Nomenclature The trivial name ozone is the most commonly used and preferred IUPAC name. The systematic names 2λ4-trioxidiene and catena-trioxygen, valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively. The name ozone derives from ozein (ὄζειν), the Greek neuter present participle for smell, referring to ozone's distinctive smell. In appropriate contexts, ozone can be viewed as trioxidane with two hydrogen atoms removed, and as such, trioxidanylidene may be used as a systematic name, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the ozone molecule. In an even more specific context, this can also name the non-radical singlet ground state, whereas the diradical state is named trioxidanediyl. Trioxidanediyl (or ozonide) is used, non-systematically, to refer to the substituent group (-OOO-). Care should be taken to avoid confusing the name of the group for the context-specific name for the ozone given above. History In 1785, Dutch chemist Martinus van Marum was conducting experiments involving electrical sparking above water when he noticed an unusual smell, which he attributed to the electrical reactions, failing to realize that he had in fact created ozone. A half century later, Christian Friedrich Schönbein noticed the same pungent odour and recognized it as the smell often following a bolt of lightning. In 1839, he succeeded in isolating the gaseous chemical and named it "ozone", from the Greek word () meaning "to smell". For this reason, Schönbein is generally credited with the discovery of ozone. 
He also noted the similarity of ozone smell to the smell of phosphorus, and in 1844 proved that the product of reaction of white phosphorus with air is identical. A subsequent effort to call ozone "electrified oxygen" he ridiculed by proposing to call the ozone from white phosphorus "phosphorized oxygen". The formula for ozone, O3, was not determined until 1865 by Jacques-Louis Soret and confirmed by Schönbein in 1867. For much of the second half of the 19th century and well into the 20th, ozone was considered a healthy component of the environment by naturalists and health-seekers. Beaumont, California, had as its official slogan "Beaumont: Zone of Ozone", as evidenced on postcards and Chamber of Commerce letterhead. Naturalists working outdoors often considered the higher elevations beneficial because of their ozone content which was readily monitored. "There is quite a different atmosphere [at higher elevation] with enough ozone to sustain the necessary energy [to work]", wrote naturalist Henry Henshaw, working in Hawaii. Seaside air was considered to be healthy because of its believed ozone content. The smell giving rise to this belief is in fact that of halogenated seaweed metabolites and dimethyl sulfide. Much of ozone's appeal seems to have resulted from its "fresh" smell, which evoked associations with purifying properties. Scientists noted its harmful effects. In 1873 James Dewar and John Gray McKendrick documented that frogs grew sluggish, birds gasped for breath, and rabbits' blood showed decreased levels of oxygen after exposure to "ozonized air", which "exercised a destructive action". Schönbein himself reported that chest pains, irritation of the mucous membranes and difficulty breathing occurred as a result of inhaling ozone, and small mammals died. In 1911, Leonard Hill and Martin Flack stated in the Proceedings of the Royal Society B that ozone's healthful effects "have, by mere iteration, become part and parcel of common belief; and yet exact physiological evidence in favour of its good effects has been hitherto almost entirely wanting ... The only thoroughly well-ascertained knowledge concerning the physiological effect of ozone, so far attained, is that it causes irritation and œdema of the lungs, and death if inhaled in relatively strong concentration for any time." During World War I, ozone was tested at Queen Alexandra Military Hospital in London as a possible disinfectant for wounds. The gas was applied directly to wounds for as long as 15 minutes. This resulted in damage to both bacterial cells and human tissue. Other sanitizing techniques, such as irrigation with antiseptics, were found preferable. Until the 1920s, it was not certain whether small amounts of oxozone, , were also present in ozone samples due to the difficulty of applying analytical chemistry techniques to the explosive concentrated chemical. In 1923, Georg-Maria Schwab (working for his doctoral thesis under Ernst Hermann Riesenfeld) was the first to successfully solidify ozone and perform accurate analysis which conclusively refuted the oxozone hypothesis. Further hitherto unmeasured physical properties of pure concentrated ozone were determined by the Riesenfeld group in the 1920s. Physical properties Ozone is a colourless or pale blue gas, slightly soluble in water and much more soluble in inert non-polar solvents such as carbon tetrachloride or fluorocarbons, in which it forms a blue solution. At , it condenses to form a dark blue liquid. 
It is dangerous to allow this liquid to warm to its boiling point, because both concentrated gaseous ozone and liquid ozone can detonate. At temperatures below , it forms a violet-black solid. Ozone has a very specific sharp odour somewhat resembling chlorine bleach. Most people can detect it at the 0.01 μmol/mol level in air. Exposure of 0.1 to 1 μmol/mol produces headaches, burning eyes and irritation of the respiratory passages. Even low concentrations of ozone in air are very destructive to organic materials such as latex, plastics and animal lung tissue. The ozone molecule is diamagnetic.

Structure

According to experimental evidence from microwave spectroscopy, ozone is a bent molecule, with C2v symmetry (similar to the water molecule). The O–O distances are . The O–O–O angle is 116.78°. The central atom is sp² hybridized with one lone pair. Ozone is a polar molecule with a dipole moment of 0.53 D. The molecule can be represented as a resonance hybrid with two contributing structures, each with a single bond on one side and a double bond on the other. The arrangement possesses an overall bond order of 1.5 for both sides. It is isoelectronic with the nitrite anion. Naturally occurring ozone can be composed of substituted isotopes (16O, 17O, 18O). A cyclic form has been predicted but not observed.

Reactions

Ozone is among the most powerful oxidizing agents known, far stronger than . It is also unstable at high concentrations, decaying into ordinary diatomic oxygen. Its half-life varies with atmospheric conditions such as temperature, humidity, and air movement. Under laboratory conditions, the half-life will average ~1500 minutes (25 hours) in still air at room temperature (24 °C), zero humidity and zero air changes per hour. 2 O3 -> 3 O2 This reaction proceeds more rapidly with increasing temperature. Deflagration of ozone can be triggered by a spark and can occur in ozone concentrations of 10 wt% or higher. Ozone can also be produced from oxygen at the anode of an electrochemical cell. This reaction can create smaller quantities of ozone for research purposes. It can be observed as an unwanted reaction in a Hofmann gas apparatus during the electrolysis of water, when the voltage is set higher than necessary.

With metals

Ozone will oxidize most metals (except gold, platinum, and iridium) to oxides of the metals in their highest oxidation state. For example:

With nitrogen and carbon compounds

Ozone also oxidizes nitric oxide to nitrogen dioxide: NO + O3 -> NO2 + O2 This reaction is accompanied by chemiluminescence. The can be further oxidized to nitrate radical: NO2 + O3 -> NO3 + O2 The formed can react with to form dinitrogen pentoxide (). Solid nitronium perchlorate can be made from , and gases: NO2 + ClO2 + 2 O3 -> NO2ClO4 + 2 O2 Ozone does not react with ammonium salts, but it oxidizes ammonia to ammonium nitrate: 2 NH3 + 4 O3 -> NH4NO3 + 4 O2 + H2O Ozone reacts with carbon to form carbon dioxide, even at room temperature: C + 2 O3 -> CO2 + 2 O2

With sulfur compounds

Ozone oxidizes sulfides to sulfates.
For example, lead(II) sulfide is oxidized to lead(II) sulfate: PbS + 4 O3 -> PbSO4 + 4 O2 Sulfuric acid can be produced from ozone, water and either elemental sulfur or sulfur dioxide: In the gas phase, ozone reacts with hydrogen sulfide to form sulfur dioxide: H2S + O3 -> SO2 + H2O In an aqueous solution, however, two competing simultaneous reactions occur, one to produce elemental sulfur, and one to produce sulfuric acid: With alkenes and alkynes Alkenes can be oxidatively cleaved by ozone, in a process called ozonolysis, giving alcohols, aldehydes, ketones, and carboxylic acids, depending on the second step of the workup. Ozone can also cleave alkynes to form an acid anhydride or diketone product. If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids. Usually ozonolysis is carried out in a solution of dichloromethane, at a temperature of −78 °C. After a sequence of cleavage and rearrangement, an organic ozonide is formed. With reductive workup (e.g. zinc in acetic acid or dimethyl sulfide), ketones and aldehydes will be formed, with oxidative workup (e.g. aqueous or alcoholic hydrogen peroxide), carboxylic acids will be formed. Other substrates All three atoms of ozone may also react, as in the reaction of tin(II) chloride with hydrochloric acid and ozone: 3 SnCl2 + 6 HCl + O3 -> 3 SnCl4 + 3 H2O Iodine perchlorate can be made by treating iodine dissolved in cold anhydrous perchloric acid with ozone: I2 + 6 HClO4 + O3 -> 2 I(ClO4)3 + 3 H2O Ozone could also react with potassium iodide to give oxygen and iodine gas that can be titrated for quantitative determination: 2KI + O3 + H2O -> 2KOH + O2 + I2 Combustion Ozone can be used for combustion reactions and combustible gases; ozone provides higher temperatures than burning in dioxygen (). The following is a reaction for the combustion of carbon subnitride which can also cause higher temperatures: 3 C4N2 + 4 O3 -> 12 CO + 3 N2 Ozone can react at cryogenic temperatures. At , atomic hydrogen reacts with liquid ozone to form a hydrogen superoxide radical, which dimerizes: Ozone decomposition Types of ozone decomposition Ozone is a toxic substance, commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers...) and its catalytic decomposition is very important to reduce pollution. This type of decomposition is the most widely used, especially with solid catalysts, and it has many advantages such as a higher conversion with a lower temperature. Furthermore, the product and the catalyst can be instantaneously separated, and this way the catalyst can be easily recovered without using any separation operation. Moreover, the most used materials in the catalytic decomposition of ozone in the gas phase are noble metals like Pt, Rh or Pd and transition metals such as Mn, Co, Cu, Fe, Ni or Ag. There are two other possibilities for the ozone decomposition in gas phase: The first one is a thermal decomposition where the ozone can be decomposed using only the action of heat. The problem is that this type of decomposition is very slow with temperatures below 250 °C. However, the decomposition rate can be increased working with higher temperatures but this would involve a high energy cost. The second one is a photochemical decomposition, which consists of radiating ozone with ultraviolet radiation (UV) and it gives rise to oxygen and radical peroxide. 
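Before turning to the formal kinetics below, the ~1500-minute laboratory half-life quoted in the Reactions section can be turned into a feel for how quickly ozone disappears, assuming a simple first-order exponential decay. This is only a rough illustration: as noted above, the real decay rate depends strongly on temperature, humidity, surfaces and catalysts.

```python
import math

# A minimal sketch of how the ~1500-minute laboratory half-life quoted in the
# Reactions section translates into remaining ozone over time, assuming simple
# first-order decay. Real decay depends strongly on temperature, humidity,
# surfaces and catalysts, so the numbers are indicative only.

HALF_LIFE_MIN = 1500.0   # still air, ~24 degrees C, zero humidity (figure from the text)

def fraction_remaining(t_minutes: float, half_life: float = HALF_LIFE_MIN) -> float:
    """Fraction of the initial ozone left after t_minutes of first-order decay."""
    k = math.log(2) / half_life          # first-order rate constant, 1/min
    return math.exp(-k * t_minutes)

for hours in (1, 8, 24, 72):
    print(f"after {hours:>2} h: {fraction_remaining(hours * 60):.1%} remaining")
```

Under these assumptions roughly half the ozone is still present after a day, and about 14 percent after three days, which is why sealed laboratory samples persist while ozone in occupied rooms is removed far faster by surfaces and reactions.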
Kinetics of ozone decomposition into molecular oxygen

The process of ozone decomposition is a complex reaction involving two elementary reactions that finally lead to molecular oxygen, and this means that the reaction order and the rate law cannot be determined from the stoichiometry of the balanced overall equation. Overall reaction: 2 O3 -> 3 O2 Rate law (observed): It has been determined that the ozone decomposition follows first-order kinetics, and from the rate law above it can be determined that the partial order with respect to molecular oxygen is −1 and with respect to ozone is 2; the global reaction order is therefore 1. The ozone decomposition consists of two elementary steps: The first corresponds to a unimolecular reaction, because only one molecule of ozone decomposes into two products (molecular oxygen and atomic oxygen). The oxygen atom from the first step is an intermediate, because it participates as a reactant in the second step, a bimolecular reaction in which two different reactants (ozone and atomic oxygen) give rise to one product, molecular oxygen in the gas phase.
Step 1: Unimolecular reaction: O3 -> O2 + O
Step 2: Bimolecular reaction: O3 + O -> 2 O2
These two steps have different reaction rates: the first is reversible and faster than the second, slower step, so the second reaction is rate-determining and is used to derive the observed reaction rate. The reaction rate laws for every step are the ones that follow: This mechanism explains the rate law of the ozone decomposition observed experimentally, and it also allows the reaction orders with respect to ozone and oxygen to be determined, from which the overall reaction order follows. The slower step, the bimolecular reaction, is the one that determines the rate of product formation, and considering that this step gives rise to two oxygen molecules, the rate law has this form: However, this equation depends on the concentration of the oxygen intermediate, which can be determined by considering the first step. Since the first step is faster and reversible and the second step is slower, the reactants and products of the first step are in equilibrium, so the concentration of the intermediate can be determined as follows: Then, using these equations, the formation rate of molecular oxygen is as shown below: Finally, the mechanism presented reproduces the rate observed experimentally, with a rate constant () corresponding to first-order kinetics, as follows: where

Reduction to ozonides

Reduction of ozone gives the ozonide anion, . Derivatives of this anion are explosive and must be stored at cryogenic temperatures. Ozonides for all the alkali metals are known. , and can be prepared from their respective superoxides: KO2 + O3 -> KO3 + O2 Although KO3 can be formed as above, it can also be formed from potassium hydroxide and ozone: 2 KOH + 5 O3 -> 2 KO3 + 5 O2 + H2O and must be prepared by action of in liquid on an ion-exchange resin containing or ions: CsO3 + Na+ -> Cs+ + NaO3 A solution of calcium in ammonia reacts with ozone to give ammonium ozonide and not calcium ozonide:

Applications

Ozone can be used to remove iron and manganese from water, forming a precipitate which can be filtered: Ozone will also oxidize dissolved hydrogen sulfide in water to sulfurous acid: 3 O3 + H2S -> H2SO3 + 3 O2 These three reactions are central to the use of ozone-based well water treatment.
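The rate expressions referred to in the kinetics discussion above do not survive in the text, so the following is a reconstruction consistent with the stated two-step mechanism and the quoted reaction orders (second order in ozone, order −1 in oxygen). The symbols k1, k−1, k2, K and kobs are conventional labels, not taken from the source.

```latex
% Reconstruction of the elided rate expressions (conventional notation, not from the source).
\begin{align*}
\text{Step 1 (fast, reversible):}\quad & \mathrm{O_3} \rightleftharpoons \mathrm{O_2} + \mathrm{O},
   \qquad K = \frac{k_1}{k_{-1}} = \frac{[\mathrm{O_2}]\,[\mathrm{O}]}{[\mathrm{O_3}]} \\
\text{Step 2 (slow, rate-determining):}\quad & \mathrm{O_3} + \mathrm{O} \longrightarrow 2\,\mathrm{O_2},
   \qquad \frac{d[\mathrm{O_2}]}{dt} = 2\,k_2\,[\mathrm{O_3}]\,[\mathrm{O}] \\
\text{Pre-equilibrium intermediate:}\quad & [\mathrm{O}] = K\,\frac{[\mathrm{O_3}]}{[\mathrm{O_2}]} \\
\text{Observed rate law:}\quad & \frac{d[\mathrm{O_2}]}{dt} = 2\,k_2 K\,\frac{[\mathrm{O_3}]^2}{[\mathrm{O_2}]}
   = k_{\mathrm{obs}}\,[\mathrm{O_3}]^2\,[\mathrm{O_2}]^{-1},
   \qquad k_{\mathrm{obs}} = 2\,k_2 K
\end{align*}
```

The exponents 2 and −1 sum to 1, matching the overall first-order behaviour stated in the text.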
Ozone will also detoxify cyanides by converting them to cyanates. CN- + O3 -> CNO- + O2 Ozone will also completely decompose urea: (NH2)2CO + O3 -> N2 + CO2 + 2 H2O

Spectroscopic properties

Ozone is a bent triatomic molecule with three vibrational modes: the symmetric stretch (1103.157 cm−1), bend (701.42 cm−1) and antisymmetric stretch (1042.096 cm−1). The symmetric stretch and bend are weak absorbers, but the antisymmetric stretch is strong and responsible for ozone being an important minor greenhouse gas. This IR band is also used to detect ambient and atmospheric ozone, although UV-based measurements are more common. The electromagnetic spectrum of ozone is quite complex. An overview can be seen at the MPI Mainz UV/VIS Spectral Atlas of Gaseous Molecules of Atmospheric Interest. All of the bands are dissociative, meaning that the molecule falls apart to O2 + O after absorbing a photon. The most important absorption is the Hartley band, extending from slightly above 300 nm down to slightly above 200 nm. It is this band that is responsible for absorbing UV-C in the stratosphere. On the long-wavelength side, the Hartley band transitions to the so-called Huggins band, which falls off rapidly until disappearing by ~360 nm. Above 400 nm, extending well out into the NIR, are the Chappuis and Wulf bands. There, unstructured absorption bands are useful for detecting high ambient concentrations of ozone, but are so weak that they do not have much practical effect. There are additional absorption bands in the far UV, which increase slowly from 200 nm down to a maximum at ~120 nm.

Ozone in Earth's atmosphere

The standard way to express total ozone levels (the amount of ozone in a given vertical column) in the atmosphere is by using Dobson units. Point measurements are reported as mole fractions in nmol/mol (parts per billion, ppb) or as concentrations in μg/m3. The study of ozone concentration in the atmosphere started in the 1920s.

Ozone layer

Location and production

The highest levels of ozone in the atmosphere are in the stratosphere, in a region also known as the ozone layer between about 10 and 50 km above the surface (or between about 6 and 31 miles). However, even in this "layer", the ozone concentrations are only two to eight parts per million, so most of the oxygen there is dioxygen, O2, at about 210,000 parts per million by volume. Ozone in the stratosphere is mostly produced from short-wave ultraviolet rays between 240 and 160 nm. Oxygen starts to absorb weakly at 240 nm in the Herzberg bands, but most of the oxygen is dissociated by absorption in the strong Schumann–Runge bands between 200 and 160 nm, where ozone does not absorb. While shorter wavelength light, extending even to the X-ray limit, is energetic enough to dissociate molecular oxygen, there is relatively little of it, and the strong solar emission at Lyman-alpha, 121 nm, falls at a point where molecular oxygen absorption is a minimum. The process of ozone creation and destruction is called the Chapman cycle and starts with the photolysis of molecular oxygen O2 + photon (radiation λ < 240 nm) -> 2 O followed by reaction of the oxygen atom with another molecule of oxygen to form ozone. O + O2 + M -> O3 + M where "M" denotes the third body that carries off the excess energy of the reaction. The ozone molecule can then absorb a UV-C photon and dissociate: O3 + photon -> O2 + O The excess kinetic energy heats the stratosphere when the O atoms and the molecular oxygen fly apart and collide with other molecules.
This conversion of UV light into kinetic energy warms the stratosphere. The oxygen atoms produced in the photolysis of ozone then react back with other oxygen molecule as in the previous step to form more ozone. In the clear atmosphere, with only nitrogen and oxygen, ozone can react with the atomic oxygen to form two molecules of : O3 + O -> 2 O2 An estimate of the rate of this termination step to the cycling of atomic oxygen back to ozone can be found simply by taking the ratios of the concentration of O2 to O3. The termination reaction is catalysed by the presence of certain free radicals, of which the most important are hydroxyl (OH), nitric oxide (NO) and atomic chlorine (Cl) and bromine (Br). In the second half of the 20th century, the amount of ozone in the stratosphere was discovered to be declining, mostly because of increasing concentrations of chlorofluorocarbons (CFC) and similar chlorinated and brominated organic molecules. The concern over the health effects of the decline led to the 1987 Montreal Protocol, the ban on the production of many ozone depleting chemicals and in the first and second decade of the 21st century the beginning of the recovery of stratospheric ozone concentrations. Importance to surface-dwelling life on Earth Ozone in the ozone layer filters out sunlight wavelengths from about 200 nm UV rays to 315 nm, with ozone peak absorption at about 250 nm. This ozone UV absorption is important to life, since it extends the absorption of UV by ordinary oxygen and nitrogen in air (which absorb all wavelengths < 200 nm) through the lower UV-C (200–280 nm) and the entire UV-B band (280–315 nm). The small unabsorbed part that remains of UV-B after passage through ozone causes sunburn in humans, and direct DNA damage in living tissues in both plants and animals. Ozone's effect on mid-range UV-B rays is illustrated by its effect on UV-B at 290 nm, which has a radiation intensity 350 million times as powerful at the top of the atmosphere as at the surface. Nevertheless, enough of UV-B radiation at similar frequency reaches the ground to cause some sunburn, and these same wavelengths are also among those responsible for the production of vitamin D in humans. The ozone layer has little effect on the longer UV wavelengths called UV-A (315–400 nm), but this radiation does not cause sunburn or direct DNA damage. While UV-A probably does cause long-term skin damage in certain humans, it is not as dangerous to plants and to the health of surface-dwelling organisms on Earth in general (see ultraviolet for more information on near ultraviolet). Low level ozone Low level ozone (or tropospheric ozone) is an atmospheric pollutant. It is not emitted directly by car engines or by industrial operations, but formed by the reaction of sunlight on air containing hydrocarbons and nitrogen oxides that react to form ozone directly at the source of the pollution or many kilometers downwind. Ozone reacts directly with some hydrocarbons such as aldehydes and thus begins their removal from the air, but the products are themselves key components of smog. Ozone photolysis by UV light leads to production of the hydroxyl radical HO• and this plays a part in the removal of hydrocarbons from the air, but is also the first step in the creation of components of smog such as peroxyacyl nitrates, which can be powerful eye irritants. 
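For reference, the Chapman cycle steps described in the stratospheric ozone discussion above can be gathered into a single scheme; this is simply the set of reactions already given in the text, rewritten in conventional photochemical notation.

```latex
% The Chapman cycle, as described in the text above.
\begin{align*}
\mathrm{O_2} + h\nu \;(\lambda < 240\ \mathrm{nm}) &\longrightarrow 2\,\mathrm{O}
   && \text{photolysis of molecular oxygen} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\longrightarrow \mathrm{O_3} + \mathrm{M}
   && \text{ozone formation; M carries off excess energy} \\
\mathrm{O_3} + h\nu \;(\text{UV-C}) &\longrightarrow \mathrm{O_2} + \mathrm{O}
   && \text{ozone photolysis, heating the stratosphere} \\
\mathrm{O} + \mathrm{O_3} &\longrightarrow 2\,\mathrm{O_2}
   && \text{termination, catalysed by OH, NO, Cl and Br}
\end{align*}
```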
The atmospheric lifetime of tropospheric ozone is about 22 days; its main removal mechanisms are being deposited to the ground, the above-mentioned reaction giving HO•, and by reactions with OH and the peroxy radical HO2•. There is evidence of significant reduction in agricultural yields because of increased ground-level ozone and pollution which interferes with photosynthesis and stunts overall growth of some plant species. The United States Environmental Protection Agency (EPA) has proposed a secondary regulation to reduce crop damage, in addition to the primary regulation designed for the protection of human health. Low level ozone in urban areas Certain examples of cities with elevated ozone readings are Denver, Colorado; Houston, Texas; and Mexico City, Mexico. Houston has a reading of around 41 nmol/mol, while Mexico City is far more hazardous, with a reading of about 125 nmol/mol. Low level ozone, or tropospheric ozone, is the most concerning type of ozone pollution in urban areas and is increasing in general. Ozone pollution in urban areas affects denser populations, and is worsened by high populations of vehicles, which emit pollutants NO2 and VOCs, the main contributors to problematic ozone levels. Ozone pollution in urban areas is especially concerning with increasing temperatures, raising heat-related mortality during heat waves. During heat waves in urban areas, ground level ozone pollution can be 20% higher than usual. Ozone pollution in urban areas reaches higher levels of exceedance in the summer and autumn, which may be explained by weather patterns and traffic patterns. People experiencing poverty are more affected by pollution in general, even though these populations are less likely to be contributing to pollution levels. As mentioned above, Denver, Colorado, is one of the many cities in the U.S. that have high amounts of ozone. According to the American Lung Association, the Denver–Aurora area is the 14th most ozone-polluted area in the U.S. The problem of high ozone levels is not new to this area. In 2004, the EPA allotted the Denver Metro/North Front Range as non-attainment areas per 1997's 8-hour ozone standard, but later deferred this status until 2007. The non-attainment standard indicates that an area does not meet the EPA's air quality standards. The Colorado Ozone Action Plan was created in response, and numerous changes were implemented from this plan. The first major change was that car emission testing was expanded across the state to more counties that did not previously mandate emissions testing, like areas of Larimer and Weld County. There have also been changes made to decrease Nitrogen Oxides (NOx) and Volatile Organic Compound (VOC) emissions, which should help lower ozone levels. One large contributor to high ozone levels in the area is the oil and natural gas industry situated in the Denver-Julesburg Basin (DJB) which overlaps with a majority of Colorado's metropolitan areas. Ozone is created naturally in the Earth's stratosphere, but is also created in the troposphere from human efforts. Briefly mentioned above, NOx and VOCs react with sunlight to create ozone through a process called photochemistry. One hour elevated ozone events (<75 ppb) "occur during June–August indicating that elevated ozone levels are driven by regional photochemistry". 
According to an article from the University of Colorado-Boulder, "Oil and natural gas VOC emission have a major role in ozone production and bear the potential to contribute to elevated O3 levels in the Northern Colorado Front Range (NCFR)". Using complex analyses to research wind patterns and emissions from large oil and natural gas operations, the authors concluded that "elevated O3 levels in the NCFR are predominantly correlated with air transport from N– ESE, which are the upwind sectors where the O&NG operations in the Wattenberg Field area of the DJB are located". Contained in the Colorado Ozone Action Plan, created in 2008, plans exist to evaluate "emission controls for large industrial sources of NOx" and "statewide control requirements for new oil and gas condensate tanks and pneumatic valves". In 2011, the Regional Haze Plan was released that included a more specific plan to help decrease NOx emissions. These efforts are increasingly difficult to implement and take many years to come to pass. Of course there are also other reasons that ozone levels remain high. These include: a growing population meaning more car emissions, and the mountains along the NCFR that can trap emissions. If interested, daily air quality readings can be found at the Colorado Department of Public Health and Environment's website. As noted earlier, Denver continues to experience high levels of ozone to this day. It will take many years and a systems-thinking approach to combat this issue of high ozone levels in the Front Range of Colorado. Ozone cracking Ozone gas attacks any polymer possessing olefinic or double bonds within its chain structure, such as natural rubber, nitrile rubber, and styrene-butadiene rubber. Products made using these polymers are especially susceptible to attack, which causes cracks to grow longer and deeper with time, the rate of crack growth depending on the load carried by the rubber component and the concentration of ozone in the atmosphere. Such materials can be protected by adding antiozonants, such as waxes, which bond to the surface to create a protective film or blend with the material and provide long term protection. Ozone cracking used to be a serious problem in car tires, for example, but it is not an issue with modern tires. On the other hand, many critical products, like gaskets and O-rings, may be attacked by ozone produced within compressed air systems. Fuel lines made of reinforced rubber are also susceptible to attack, especially within the engine compartment, where some ozone is produced by electrical components. Storing rubber products in close proximity to a DC electric motor can accelerate ozone cracking. The commutator of the motor generates sparks which in turn produce ozone. Ozone as a greenhouse gas Although ozone was present at ground level before the Industrial Revolution, peak concentrations are now far higher than the pre-industrial levels, and even background concentrations well away from sources of pollution are substantially higher. Ozone acts as a greenhouse gas, absorbing some of the infrared energy emitted by the earth. Quantifying the greenhouse gas potency of ozone is difficult because it is not present in uniform concentrations across the globe. However, the most widely accepted scientific assessments relating to climate change (e.g. the Intergovernmental Panel on Climate Change Third Assessment Report) suggest that the radiative forcing of tropospheric ozone is about 25% that of carbon dioxide. 
The annual global warming potential of tropospheric ozone is between 918 and 1022 tons carbon dioxide equivalent/tons tropospheric ozone. This means on a per-molecule basis, ozone in the troposphere has a radiative forcing effect roughly 1,000 times as strong as carbon dioxide. However, tropospheric ozone is a short-lived greenhouse gas, which decays in the atmosphere much more quickly than carbon dioxide. This means that over a 20-year span, the global warming potential of tropospheric ozone is much less, roughly 62 to 69 tons carbon dioxide equivalent / ton tropospheric ozone. Because of its short-lived nature, tropospheric ozone does not have strong global effects, but has very strong radiative forcing effects on regional scales. In fact, there are regions of the world where tropospheric ozone has a radiative forcing up to 150% of carbon dioxide. For example, ozone increase in the troposphere is shown to be responsible for ~30% of upper Southern Ocean interior warming between 1955 and 2000. Health effects For the last few decades, scientists studied the effects of acute and chronic ozone exposure on human health. Hundreds of studies suggest that ozone is harmful to people at levels currently found in urban areas. Ozone has been shown to affect the respiratory, cardiovascular and central nervous system. Early death and problems in reproductive health and development are also shown to be associated with ozone exposure. Vulnerable populations The American Lung Association has identified five populations who are especially vulnerable to the effects of breathing ozone: Children and teens People 65 years old and older People who work or exercise outdoors People with existing lung diseases, such as asthma and chronic obstructive pulmonary disease (also known as COPD, which includes emphysema and chronic bronchitis) People with cardiovascular disease Additional evidence suggests that women, those with obesity and low-income populations may also face higher risk from ozone, although more research is needed. Acute ozone exposure Acute ozone exposure ranges from hours to a few days. Because ozone is a gas, it directly affects the lungs and the entire respiratory system. Inhaled ozone causes inflammation and acute—but reversible—changes in lung function, as well as airway hyperresponsiveness. These changes lead to shortness of breath, wheezing, and coughing which may exacerbate lung diseases, like asthma or chronic obstructive pulmonary disease (COPD) resulting in the need to receive medical treatment. Acute and chronic exposure to ozone has been shown to cause an increased risk of respiratory infections, due to the following mechanism. Multiple studies have been conducted to determine the mechanism behind ozone's harmful effects, particularly in the lungs. These studies have shown that exposure to ozone causes changes in the immune response within the lung tissue, resulting in disruption of both the innate and adaptive immune response, as well as altering the protective function of lung epithelial cells. It is thought that these changes in immune response and the related inflammatory response are factors that likely contribute to the increased risk of lung infections, and worsening or triggering of asthma and reactive airways after exposure to ground-level ozone pollution. The innate (cellular) immune system consists of various chemical signals and cell types that work broadly and against multiple pathogen types, typically bacteria or foreign bodies/substances in the host. 
The cells of the innate system include phagocytes, neutrophils, both thought to contribute to the mechanism of ozone pathology in the lungs, as the functioning of these cell types have been shown to change after exposure to ozone. Macrophages, cells that serve the purpose of eliminating pathogens or foreign material through the process of "phagocytosis", have been shown to change the level of inflammatory signals they release in response to ozone, either up-regulating and resulting in an inflammatory response in the lung, or down-regulating and reducing immune protection. Neutrophils, another important cell type of the innate immune system that primarily targets bacterial pathogens, are found to be present in the airways within 6 hours of exposure to high ozone levels. Despite high levels in the lung tissues, however, their ability to clear bacteria appears impaired by exposure to ozone. The adaptive immune system is the branch of immunity that provides long-term protection via the development of antibodies targeting specific pathogens and is also impacted by high ozone exposure. Lymphocytes, a cellular component of the adaptive immune response, produce an increased amount of inflammatory chemicals called "cytokines" after exposure to ozone, which may contribute to airway hyperreactivity and worsening asthma symptoms. The airway epithelial cells also play an important role in protecting individuals from pathogens. In normal tissue, the epithelial layer forms a protective barrier, and also contains specialized ciliary structures that work to clear foreign bodies, mucus and pathogens from the lungs. When exposed to ozone, the cilia become damaged and mucociliary clearance of pathogens is reduced. Furthermore, the epithelial barrier becomes weakened, allowing pathogens to cross the barrier, proliferate and spread into deeper tissues. Together, these changes in the epithelial barrier help make individuals more susceptible to pulmonary infections. Inhaling ozone not only affects the immune system and lungs, but it may also affect the heart as well. Ozone causes short-term autonomic imbalance leading to changes in heart rate and reduction in heart rate variability; and high levels exposure for as little as one-hour results in a supraventricular arrhythmia in the elderly, both increase the risk of premature death and stroke. Ozone may also lead to vasoconstriction resulting in increased systemic arterial pressure contributing to increased risk of cardiac morbidity and mortality in patients with pre-existing cardiac diseases. Chronic ozone exposure Breathing ozone for periods longer than eight hours at a time for weeks, months or years defines chronic exposure. Numerous studies suggest a serious impact on the health of various populations from this exposure. One study finds significant positive associations between chronic ozone and all-cause, circulatory, and respiratory mortality with 2%, 3%, and 12% increases in risk per 10 ppb and report an association (95% CI) of annual ozone and all-cause mortality with a hazard ratio of 1.02 (1.01–1.04), and with cardiovascular mortality of 1.03 (1.01–1.05). A similar study finds similar associations with all-cause mortality and even larger effects for cardiovascular mortality. An increased risk of mortality from respiratory causes is associated with long-term chronic exposure to ozone. Chronic ozone has detrimental effects on children, especially those with asthma. 
The risk for hospitalization in children with asthma increases with chronic exposure to ozone; younger children and those with low-income status are at even greater risk. Adults suffering from respiratory diseases (asthma, COPD, lung cancer) are at a higher risk of mortality and morbidity, and critically ill patients have an increased risk of developing acute respiratory distress syndrome with chronic ozone exposure as well. Ozone produced by air cleaners Ozone generators sold as air cleaners intentionally produce the gas ozone. These are often marketed to control indoor air pollution, and use misleading terms to describe ozone. Some examples are describing it as "energized oxygen" or "pure air", suggesting that ozone is a healthy or "better" kind of oxygen. However, according to the EPA, "There is evidence to show that at concentrations that do not exceed public health standards, ozone is not effective at removing many odor-causing chemicals", and "If used at concentrations that do not exceed public health standards, ozone applied to indoor air does not effectively remove viruses, bacteria, mold, or other biological pollutants." Furthermore, another report states that "results of some controlled studies show that concentrations of ozone considerably higher than these [human safety] standards are possible even when a user follows the manufacturer's operating instructions". The California Air Resources Board has a page listing air cleaners (many with ionizers) meeting their indoor ozone limit of 0.050 parts per million. From that article: All portable indoor air cleaning devices sold in California must be certified by the California Air Resources Board (CARB). To be certified, air cleaners must be tested for electrical safety and ozone emissions, and meet an ozone emission concentration limit of 0.050 parts per million. Ozone air pollution Ozone precursors are a group of pollutants, predominantly those emitted during the combustion of fossil fuels. Ground-level ozone pollution (tropospheric ozone) is created near the Earth's surface by the action of daylight UV rays on these precursors. The ozone at ground level is primarily from fossil fuel precursors, but methane is a natural precursor, and the very low natural background level of ozone at ground level is considered safe. This section examines the health impacts of fossil fuel burning, which raises ground-level ozone far above background levels. There is a great deal of evidence to show that ground-level ozone can harm lung function and irritate the respiratory system. Exposure to ozone (and the pollutants that produce it) is linked to premature death, asthma, bronchitis, heart attack, and other cardiopulmonary problems. Long-term exposure to ozone has been shown to increase the risk of death from respiratory illness. A study of 450,000 people living in U.S. cities saw a significant correlation between ozone levels and respiratory illness over the 18-year follow-up period. The study revealed that people living in cities with high ozone levels, such as Houston or Los Angeles, had an over 30% increased risk of dying from lung disease. Air quality guidelines such as those from the World Health Organization, the U.S. Environmental Protection Agency (EPA), and the European Union are based on detailed studies designed to identify the levels that can cause measurable ill health effects.
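The chronic-exposure studies cited above express risk as hazard ratios per 10 ppb of long-term ozone. As a minimal sketch of how such a figure is rescaled to a different exposure increment under the usual log-linear assumption (an illustration, not a calculation taken from the cited studies), the ratio is raised to the power of the increment divided by 10 ppb:

```python
def rescale_hazard_ratio(hr_per_10ppb: float, increment_ppb: float) -> float:
    """Rescale a hazard ratio reported per 10 ppb of long-term ozone
    to an arbitrary exposure increment (log-linear assumption)."""
    return hr_per_10ppb ** (increment_ppb / 10.0)

# Example: the all-cause hazard ratio of 1.02 per 10 ppb quoted above,
# rescaled to an assumed 25 ppb difference between two locations
# (the 25 ppb figure is purely illustrative).
hr_25 = rescale_hazard_ratio(1.02, 25.0)
print(f"HR per 25 ppb: {hr_25:.3f}")                  # ~1.051
print(f"Excess risk: {(hr_25 - 1.0) * 100:.1f}%")     # ~5.1%
```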
According to scientists with the EPA, susceptible people can be adversely affected by ozone levels as low as 40 nmol/mol. In the EU, the current target value for ozone concentrations is 120 μg/m3, which is about 60 nmol/mol. This target applies to all member states in accordance with Directive 2008/50/EC. Ozone concentration is measured as a maximum daily mean of 8-hour averages and the target should not be exceeded on more than 25 calendar days per year, starting from January 2010. Whilst the directive requires strict compliance with the 120 μg/m3 limit in the future (i.e. the mean ozone concentration is not to be exceeded on any day of the year), no date has been set for this requirement and it is treated as a long-term objective. In the US, the Clean Air Act directs the EPA to set National Ambient Air Quality Standards for several pollutants, including ground-level ozone, and counties out of compliance with these standards are required to take steps to reduce their levels. In May 2008, under a court order, the EPA lowered its ozone standard from 80 nmol/mol to 75 nmol/mol. The move proved controversial, since the Agency's own scientists and advisory board had recommended lowering the standard to 60 nmol/mol. Many public health and environmental groups also supported the 60 nmol/mol standard, and the World Health Organization recommends 100 μg/m3 (51 nmol/mol). On January 7, 2010, the U.S. Environmental Protection Agency (EPA) announced proposed revisions to the National Ambient Air Quality Standard (NAAQS) for the pollutant ozone, the principal component of smog: ... EPA proposes that the level of the 8-hour primary standard, which was set at 0.075 μmol/mol in the 2008 final rule, should instead be set at a lower level within the range of 0.060 to 0.070 μmol/mol, to provide increased protection for children and other at-risk populations against an array of O3-related adverse health effects that range from decreased lung function and increased respiratory symptoms to serious indicators of respiratory morbidity including emergency department visits and hospital admissions for respiratory causes, and possibly cardiovascular-related morbidity as well as total non-accidental and cardiopulmonary mortality ... On October 26, 2015, the EPA published a final rule with an effective date of December 28, 2015, that revised the 8-hour primary NAAQS from 0.075 ppm to 0.070 ppm. The EPA has developed an air quality index (AQI) to help explain air pollution levels to the general public. Under the current standards, eight-hour average ozone mole fractions of 85 to 104 nmol/mol are described as "unhealthy for sensitive groups", 105 nmol/mol to 124 nmol/mol as "unhealthy", and 125 nmol/mol to 404 nmol/mol as "very unhealthy". Ozone can also be present in indoor air pollution, partly as a result of electronic equipment such as photocopiers. A connection has also been known to exist between the increased pollen, fungal spores, and ozone caused by thunderstorms and hospital admissions of asthma sufferers. In the Victorian era, one British folk myth held that the smell of the sea was caused by ozone. In fact, the characteristic "smell of the sea" is caused by dimethyl sulfide, a chemical generated by phytoplankton. Victorian Britons considered the resulting smell "bracing". Heat waves An investigation to assess the joint mortality effects of ozone and heat during the European heat waves in 2003 concluded that these appear to be additive.
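The guideline values above mix mass-concentration units (μg/m3) with mole fractions (nmol/mol, i.e. parts per billion), and the AQI bins 8-hour averages into named categories. The sketch below shows the arithmetic; the conversion of roughly 2 μg/m3 per ppb (ozone at about 25 °C and 1 atm) and the helper names are illustrative assumptions, while the breakpoints are simply those quoted in this section.

```python
# Approximate conversion for ozone near 25 °C and 1 atm:
# C[ug/m3] ~= C[ppb] * M / 24.45, with M(O3) = 48 g/mol, i.e. ~1.96 ug/m3 per ppb.
M_O3 = 48.0          # molar mass of ozone, g/mol
MOLAR_VOL = 24.45    # molar volume of an ideal gas at 25 °C and 1 atm, L/mol

def ppb_to_ugm3(ppb: float) -> float:
    return ppb * M_O3 / MOLAR_VOL

def ugm3_to_ppb(ugm3: float) -> float:
    return ugm3 * MOLAR_VOL / M_O3

def aqi_category(ppb: float) -> str:
    """Classify an 8-hour average using the breakpoints quoted in the text."""
    if 85 <= ppb <= 104:
        return "unhealthy for sensitive groups"
    if 105 <= ppb <= 124:
        return "unhealthy"
    if 125 <= ppb <= 404:
        return "very unhealthy"
    return "below the quoted breakpoints" if ppb < 85 else "outside the quoted ranges"

print(round(ugm3_to_ppb(120)))   # EU target of 120 ug/m3 -> ~61 ppb
print(aqi_category(110))         # -> "unhealthy"
```

Under these assumptions the EU target of 120 μg/m3 comes out at about 61 ppb, consistent with the "about 60 nmol/mol" figure given above.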
Physiology Ozone, along with reactive forms of oxygen such as superoxide, singlet oxygen, hydrogen peroxide, and hypochlorite ions, is produced by white blood cells and other biological systems (such as the roots of marigolds) as a means of destroying foreign bodies. Ozone reacts directly with organic double bonds. Also, when ozone breaks down to dioxygen it gives rise to oxygen free radicals, which are highly reactive and capable of damaging many organic molecules. Moreover, it is believed that the powerful oxidizing properties of ozone may be a contributing factor in inflammation. The cause-and-effect relationship of how ozone is created in the body and what it does is still under consideration and still subject to various interpretations, since other body chemical processes can trigger some of the same reactions. There is evidence linking the antibody-catalyzed water-oxidation pathway of the human immune response to the production of ozone. In this system, ozone is produced by antibody-catalyzed production of trioxidane from water and neutrophil-produced singlet oxygen. When inhaled, ozone reacts with compounds lining the lungs to form specific, cholesterol-derived metabolites that are thought to facilitate the build-up and pathogenesis of atherosclerotic plaques (a form of heart disease). These metabolites have been confirmed as naturally occurring in human atherosclerotic arteries and are categorized into a class of secosterols termed atheronals, generated by ozonolysis of cholesterol's double bond to form a 5,6 secosterol as well as a secondary condensation product via aldolization. Impact on plant growth and crop yields Ozone has been implicated as having an adverse effect on plant growth: "... ozone reduced total chlorophylls, carotenoid and carbohydrate concentration, and increased 1-aminocyclopropane-1-carboxylic acid (ACC) content and ethylene production. In treated plants, the ascorbate leaf pool was decreased, while lipid peroxidation and solute leakage were significantly higher than in ozone-free controls. The data indicated that ozone triggered protective mechanisms against oxidative stress in citrus." Studies that have used pepper plants as a model have shown that ozone decreased fruit yield and changed fruit quality. Furthermore, a decrease in chlorophyll levels and antioxidant defences was also observed in the leaves, as well as increased reactive oxygen species (ROS) levels and lipid and protein damage. A 2022 study concludes that East Asia loses 63 billion dollars in crops per year due to ozone pollution, a byproduct of fossil fuel combustion. China loses about one-third of its potential wheat production and one-fourth of its rice production. Safety regulations Because of its strongly oxidizing properties, ozone is a primary irritant, affecting especially the eyes and respiratory system, and can be hazardous even at low concentrations. The Canadian Centre for Occupational Safety and Health reports that: "Even very low concentrations of ozone can be harmful to the upper respiratory tract and the lungs. The severity of injury depends on both the concentration of ozone and the duration of exposure. Severe and permanent lung injury or death could result from even a very short-term exposure to relatively low concentrations." To protect workers potentially exposed to ozone, the U.S.
Occupational Safety and Health Administration (OSHA) has established a permissible exposure limit (PEL) of 0.1 μmol/mol (29 CFR 1910.1000 table Z-1), calculated as an 8-hour time-weighted average. Higher concentrations are especially hazardous and NIOSH has established an Immediately Dangerous to Life and Health Limit (IDLH) of 5 μmol/mol. Work environments where ozone is used or where it is likely to be produced should have adequate ventilation and it is prudent to have a monitor for ozone that will alarm if the concentration exceeds the OSHA PEL. Continuous monitors for ozone are available from several suppliers. Elevated ozone exposure can occur on passenger aircraft, with levels depending on altitude and atmospheric turbulence. U.S. Federal Aviation Administration regulations set a limit of 250 nmol/mol with a maximum four-hour average of 100 nmol/mol. Some planes are equipped with ozone converters in the ventilation system to reduce passenger exposure. Production Ozone generators, or ozonators, are used to produce ozone for cleaning air or removing smoke odours in unoccupied rooms. These ozone generators can produce over 3 g of ozone per hour. Ozone often forms in nature under conditions where O2 will not react. Ozone used in industry is measured in μmol/mol (ppm, parts per million), nmol/mol (ppb, parts per billion), μg/m3, mg/h (milligrams per hour) or weight percent. The regime of applied concentrations ranges from 1% to 5% (in air) and from 6% to 14% (in oxygen) for older generation methods. New electrolytic methods can achieve up to 20% to 30% dissolved ozone concentrations in output water. Temperature and humidity play a large role in how much ozone is being produced using traditional generation methods (such as corona discharge and ultraviolet light). Old generation methods will produce less than 50% of nominal capacity if operated with humid ambient air, as opposed to very dry air. New generators, using electrolytic methods, can achieve higher purity and dissolution through using water molecules as the source of ozone production. Corona discharge method This is the most common type of ozone generator for most industrial and personal uses. While variations of the "hot spark" corona discharge method of ozone production exist, including medical-grade and industrial-grade ozone generators, these units usually work by means of a corona discharge tube or ozone plate. They are typically cost-effective and do not require an oxygen source other than the ambient air to produce ozone concentrations of 3–6%. Fluctuations in ambient air, due to weather or other environmental conditions, cause variability in ozone production. However, these generators also produce nitrogen oxides as a by-product. Use of an air dryer can reduce or eliminate nitric acid formation by removing water vapor and increase ozone production. At room temperature, nitric acid will form into a vapour that is hazardous if inhaled. Symptoms can include chest pain, shortness of breath, headaches and a dry nose and throat causing a burning sensation. Use of an oxygen concentrator can further increase the ozone production and further reduce the risk of nitric acid formation by removing not only the water vapor, but also the bulk of the nitrogen. Ultraviolet light UV ozone generators, or vacuum-ultraviolet (VUV) ozone generators, employ a light source that generates a narrow-band ultraviolet light, a subset of that produced by the Sun. The Sun's UV sustains the ozone layer in the stratosphere of Earth.
UV ozone generators use ambient air for ozone production; no air preparation systems (air dryer or oxygen concentrator) are used, so these generators tend to be less expensive. However, UV ozone generators usually produce ozone with a concentration of about 0.5% or lower, which limits the potential ozone production rate. Another disadvantage of this method is that it requires the ambient air (oxygen) to be exposed to the UV source for a longer amount of time, and any gas that is not exposed to the UV source will not be treated. This makes UV generators impractical for use in situations that deal with rapidly moving air or water streams (in-duct air sterilization, for example). Production of ozone is one of the potential dangers of ultraviolet germicidal irradiation. VUV ozone generators are used in swimming pool and spa applications ranging up to millions of gallons of water. Unlike corona discharge generators, VUV ozone generators do not produce harmful nitrogen by-products; also unlike corona discharge systems, they work extremely well in humid air environments. There is also not normally a need for expensive off-gas mechanisms, and no need for air dryers or oxygen concentrators, which require extra costs and maintenance. Cold plasma In the cold plasma method, pure oxygen gas is exposed to a plasma created by dielectric barrier discharge (DBD). The diatomic oxygen is split into single atoms, which then recombine in triplets to form ozone. It is common in the industry to mislabel some DBD ozone generators as corona discharge (CD) generators. Typically, all solid flat metal electrode ozone generators produce ozone using the dielectric barrier discharge method. Cold plasma machines use pure oxygen as the input source and produce a maximum concentration of about 24% ozone. They produce far greater quantities of ozone in a given time compared to ultraviolet production, which has about 2% efficiency. The discharges manifest as filamentary transfer of electrons (micro discharges) in a gap between two electrodes. In order to evenly distribute the micro discharges, a dielectric insulator must be used to separate the metallic electrodes and to prevent arcing. Electrolytic Electrolytic ozone generation (EOG) splits water molecules into H2, O2, and O3. In most EOG methods, the hydrogen gas will be removed to leave oxygen and ozone as the only reaction products. Therefore, EOG can achieve higher dissolution in water without other competing gases found in the corona discharge method, such as nitrogen gases present in ambient air. This method of generation can achieve concentrations of 20–30% and is independent of air quality because water is used as the source material. Production of ozone electrolytically is typically unfavorable because of the high overpotential required to produce ozone as compared to oxygen. This is why ozone is not produced during typical water electrolysis. However, it is possible to increase the overpotential of oxygen by careful catalyst selection such that ozone is preferentially produced under electrolysis. Catalysts typically chosen for this approach are lead dioxide or boron-doped diamond. The ozone to oxygen ratio is improved by increasing current density at the anode, cooling the electrolyte around the anode close to 0 °C, using an acidic electrolyte (such as dilute sulfuric acid) instead of a basic solution, and by applying pulsed current instead of DC.
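Because the electrolytic route described above transfers six electrons per ozone molecule, Faraday's law puts an upper bound on how much ozone a given cell current can produce. The sketch below is illustrative only; the 15% current efficiency is an assumed placeholder, since real efficiency depends on the catalyst, electrolyte and current density discussed in the text.

```python
F = 96485.0      # Faraday constant, C per mol of electrons
M_O3 = 48.0      # molar mass of ozone, g/mol

def electrolytic_ozone_g_per_h(current_a: float, current_efficiency: float) -> float:
    """Estimate ozone output of an electrolytic cell.

    Anode reaction: 3 H2O -> O3 + 6 H+ + 6 e-, i.e. six electrons per O3.
    current_efficiency is the assumed fraction of charge that actually goes
    to ozone (the rest evolves O2); the value used below is illustrative.
    """
    electrons_mol_per_s = current_a / F
    o3_mol_per_s = electrons_mol_per_s / 6.0 * current_efficiency
    return o3_mol_per_s * M_O3 * 3600.0   # grams per hour

# Example: a 20 A cell at an assumed 15% current efficiency.
print(f"{electrolytic_ozone_g_per_h(20.0, 0.15):.2f} g O3 per hour")   # ~0.90
```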
Special considerations Ozone cannot be stored and transported like other industrial gases (because it quickly decays into diatomic oxygen) and must therefore be produced on site. Available ozone generators vary in the arrangement and design of the high-voltage electrodes. At production capacities higher than 20 kg per hour, a gas/water tube heat-exchanger may be utilized as ground electrode and assembled with tubular high-voltage electrodes on the gas-side. The regime of typical gas pressures is around absolute in oxygen and absolute in air. Several megawatts of electrical power may be installed in large facilities, applied as single-phase AC current at 50 to 8000 Hz and peak voltages between 3,000 and 20,000 volts. Applied voltage is usually inversely related to the applied frequency. The dominating parameter influencing ozone generation efficiency is the gas temperature, which is controlled by cooling water temperature and/or gas velocity. The cooler the water, the better the ozone synthesis. The lower the gas velocity, the higher the concentration (but the lower the net ozone produced). At typical industrial conditions, almost 90% of the effective power is dissipated as heat and needs to be removed by a sufficient cooling water flow. Because of the high reactivity of ozone, only a few materials may be used, such as stainless steel (quality 316L), titanium, aluminium (as long as no moisture is present), glass, polytetrafluoroethylene, or polyvinylidene fluoride. Viton may be used with the restriction of constant mechanical forces and absence of humidity (humidity limitations apply depending on the formulation). Hypalon may be used with the restriction that no water comes in contact with it, except for normal atmospheric levels. Embrittlement or shrinkage is the common mode of failure of elastomers with exposure to ozone. Ozone cracking is the common mode of failure of elastomer seals like O-rings. Silicone rubbers are usually adequate for use as gaskets in ozone concentrations below 1 wt%, such as in equipment for accelerated aging of rubber samples. Incidental production Ozone may be formed by electrical discharges and by the action of high-energy electromagnetic radiation. Unsuppressed arcing in electrical contacts, motor brushes, or mechanical switches breaks down the chemical bonds of the atmospheric oxygen surrounding the contacts [O2 → 2 O]. Free radicals of oxygen in and around the arc recombine to create ozone [O3]. Certain electrical equipment generates significant levels of ozone. This is especially true of devices using high voltages, such as ionic air purifiers, laser printers, photocopiers, tasers and arc welders. Electric motors using brushes can generate ozone from repeated sparking inside the unit. Large motors that use brushes, such as those used by elevators or hydraulic pumps, will generate more ozone than smaller motors. Ozone is similarly formed in the Catatumbo lightning storms phenomenon on the Catatumbo River in Venezuela, though ozone's instability makes it dubious that it has any effect on the ozonosphere. It is the world's largest single natural generator of ozone, lending support to calls for it to be designated a UNESCO World Heritage Site. Laboratory production In the laboratory, ozone can be produced by electrolysis using a 9-volt battery, a pencil graphite rod cathode, a platinum wire anode and a 3-molar sulfuric acid electrolyte. The half-cell reactions taking place are, at the anode, 3 H2O → O3 + 6 H+ + 6 e− and, at the cathode, 2 H+ + 2 e− → H2, where E° denotes the standard electrode potential of each half-reaction.
In the net reaction, three equivalents of water are converted into one equivalent of ozone and three equivalents of hydrogen. Oxygen formation is a competing reaction. Ozone can also be generated by a high-voltage arc. In its simplest form, high-voltage AC, such as the output of a neon-sign transformer, is connected to two metal rods with the ends placed sufficiently close to each other to allow an arc. The resulting arc will convert atmospheric oxygen to ozone. It is often desirable to contain the ozone. This can be done with an apparatus consisting of two concentric glass tubes sealed together at the top with gas ports at the top and bottom of the outer tube. The inner core should have a length of metal foil inserted into it connected to one side of the power source. The other side of the power source should be connected to another piece of foil wrapped around the outer tube. A source of dry O2 is applied to the bottom port. When high voltage is applied to the foil leads, electricity will discharge through the dry dioxygen in the middle and form O3 and O2, which will flow out the top port. This is called a Siemens ozoniser. The reaction can be summarized as follows: 3 O2 → 2 O3 (driven by the electric discharge). Applications Industry The largest use of ozone is in the preparation of pharmaceuticals, synthetic lubricants, and many other commercially useful organic compounds, where it is used to sever carbon-carbon bonds. It can also be used for bleaching substances and for killing microorganisms in air and water sources. Many municipal drinking water systems kill bacteria with ozone instead of the more common chlorine. Ozone has a very high oxidation potential. Ozone does not form organochlorine compounds, nor does it remain in the water after treatment. Ozone can form the suspected carcinogen bromate in source water with high bromide concentrations. The U.S. Safe Drinking Water Act mandates that these systems introduce an amount of chlorine to maintain a minimum of 0.2 μmol/mol residual free chlorine in the pipes, based on results of regular testing. Where electrical power is abundant, ozone is a cost-effective method of treating water, since it is produced on demand and does not require transportation and storage of hazardous chemicals. Once it has decayed, it leaves no taste or odour in drinking water. Although low levels of ozone have been advertised to be of some disinfectant use in residential homes, the concentration of ozone in dry air required to have a rapid, substantial effect on airborne pathogens exceeds safe levels recommended by the U.S. Occupational Safety and Health Administration and Environmental Protection Agency. Humidity control can vastly improve both the killing power of the ozone and the rate at which it decays back to oxygen (more humidity allows more effectiveness). Spore forms of most pathogens are very tolerant of atmospheric ozone in concentrations at which asthma patients start to have issues. In 1908 artificial ozonisation of the Central Line of the London Underground was introduced for aerial disinfection. The process was found to be worthwhile, but was phased out by 1956. However, the beneficial effect was maintained by the ozone created incidentally from the electrical discharges of the train motors (see above: Incidental production). Ozone generators were made available to schools and universities in Wales for the autumn term of 2021, to disinfect classrooms after COVID-19 outbreaks.
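Referring back to the 3 O2 → 2 O3 summary above, a simple mass balance shows how the fraction of feed oxygen converted maps onto ozone output and outlet concentration by weight; the feed rate and conversion fraction used below are illustrative values, not measurements.

```python
M_O2, M_O3 = 32.0, 48.0   # molar masses, g/mol

def ozoniser_output(o2_feed_g_per_h: float, conversion: float):
    """Mass balance for 3 O2 -> 2 O3 at a given fractional O2 conversion.

    Returns (ozone produced in g/h, ozone weight fraction of the outlet gas).
    Total mass is conserved, so the outlet mass flow equals the feed flow.
    """
    o2_reacted = o2_feed_g_per_h * conversion
    o3_produced = o2_reacted * (2 * M_O3) / (3 * M_O2)   # equals o2_reacted, since 2*48 = 3*32
    wt_fraction = o3_produced / o2_feed_g_per_h
    return o3_produced, wt_fraction

# Example: 100 g/h of dry O2 feed with an assumed 5% conversion.
o3, wt = ozoniser_output(100.0, 0.05)
print(f"{o3:.1f} g O3/h, {wt:.1%} by weight")   # 5.0 g O3/h, 5.0% by weight
```

Because two moles of ozone (96 g) replace the three moles of oxygen (96 g) that react, the outlet weight fraction of ozone simply equals the conversion fraction.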
Industrially, ozone is used to: disinfect laundry in hospitals, food factories, care homes, etc.; disinfect water in place of chlorine; deodorize air and objects, such as after a fire (a process extensively used in fabric restoration); kill bacteria on food or on contact surfaces (water-intensive industries such as breweries and dairy plants can make effective use of dissolved ozone as a replacement for chemical sanitizers such as peracetic acid, hypochlorite or heat); disinfect cooling towers and control Legionella with reduced chemical consumption, water bleed-off and increased performance; sanitize swimming pools and spas; kill insects in stored grain; scrub yeast and mold spores from the air in food processing plants; wash fresh fruits and vegetables to kill yeast, mold and bacteria; chemically attack contaminants in water (iron, arsenic, hydrogen sulfide, nitrites, and complex organics lumped together as "colour"); provide an aid to flocculation (agglomeration of molecules, which aids in filtration, where the iron and arsenic are removed); manufacture chemical compounds via chemical synthesis; clean and bleach fabrics (the former use is utilized in fabric restoration; the latter use is patented); act as an antichlor in chlorine-based bleaching; assist in processing plastics to allow adhesion of inks; age rubber samples to determine the useful life of a batch of rubber; and eradicate water-borne parasites such as Giardia lamblia and Cryptosporidium in surface water treatment plants. Ozone is a reagent in many organic reactions in the laboratory and in industry. Ozonolysis is the cleavage of an alkene to carbonyl compounds. Many hospitals around the world use large ozone generators to decontaminate operating rooms between surgeries. The rooms are cleaned and then sealed airtight before being filled with ozone, which effectively kills or neutralizes all remaining bacteria. Ozone is used as an alternative to chlorine or chlorine dioxide in the bleaching of wood pulp. It is often used in conjunction with oxygen and hydrogen peroxide to eliminate the need for chlorine-containing compounds in the manufacture of high-quality, white paper. Ozone can be used to detoxify cyanide wastes (for example from gold and silver mining) by oxidizing cyanide to cyanate and eventually to carbon dioxide. Water disinfection Since the invention of dielectric barrier discharge (DBD) plasma reactors, they have been employed for water treatment with ozone. However, with cheaper alternative disinfectants like chlorine, such applications of DBD ozone water decontamination have been limited by high power consumption and bulky equipment. Despite this, with research revealing the negative impacts of common disinfectants like chlorine with respect to toxic residuals and ineffectiveness in killing certain micro-organisms, DBD plasma-based ozone decontamination is of interest among currently available technologies. Although ozonation of water with a high concentration of bromide does lead to the formation of undesirable brominated disinfection byproducts, unless drinking water is produced by desalination, ozonation can generally be applied without concern for these byproducts. Advantages of ozone include high thermodynamic oxidation potential, less sensitivity to organic material and better tolerance for pH variations while retaining the ability to kill bacteria, fungi, viruses, as well as spores and cysts. Although ozone has been widely accepted in Europe for decades, it is sparingly used for decontamination in the U.S.
due to limitations of high power consumption, bulky installation, and the stigma attached to ozone toxicity. Considering this, recent research efforts have been directed towards the study of effective ozone water treatment systems. Researchers have looked into lightweight and compact low-power surface DBD reactors, energy-efficient volume DBD reactors and low-power micro-scale DBD reactors. Such studies can help pave the path to re-acceptance of DBD plasma-based ozone decontamination of water, especially in the U.S. Consumers Ozone levels that are safe for people are ineffective at killing fungi and bacteria. Some consumer disinfection and cosmetic products emit ozone at levels harmful to human health. Devices generating high levels of ozone, some of which use ionization, are used to sanitize and deodorize uninhabited buildings, rooms, ductwork, woodsheds, boats and other vehicles. Ozonated water is used to launder clothes and to sanitize food, drinking water, and surfaces in the home. According to the U.S. Food and Drug Administration (FDA), it is "amending the food additive regulations to provide for the safe use of ozone in gaseous and aqueous phases as an antimicrobial agent on food, including meat and poultry." Studies at California Polytechnic University demonstrated that 0.3 μmol/mol levels of ozone dissolved in filtered tapwater can produce a reduction of more than 99.99% in such food-borne microorganisms as salmonella, E. coli O157:H7 and Campylobacter. This quantity is 20,000 times the WHO-recommended limits stated above. Ozone can be used to remove pesticide residues from fruits and vegetables. Ozone is used in homes and hot tubs to kill bacteria in the water and to reduce the amount of chlorine or bromine required by reactivating them to their free state. Since ozone does not remain in the water long enough, ozone by itself is ineffective at preventing cross-contamination among bathers and must be used in conjunction with halogens. Gaseous ozone created by ultraviolet light or by corona discharge is injected into the water. Ozone is also widely used in the treatment of water in aquariums and fishponds. Its use can minimize bacterial growth, control parasites, eliminate transmission of some diseases, and reduce or eliminate "yellowing" of the water. Ozone must not come in contact with fishes' gill structures. Natural saltwater (with life forms) provides enough "instantaneous demand" that controlled amounts of ozone activate bromide ions to hypobromous acid, and the ozone entirely decays in a few seconds to minutes. If oxygen-fed ozone is used, the water will be higher in dissolved oxygen and fishes' gill structures will atrophy, making them dependent on oxygen-enriched water. Aquaculture Ozonation – a process of infusing water with ozone – can be used in aquaculture to facilitate organic breakdown. Ozone is also added to recirculating systems to reduce nitrite levels through conversion into nitrate. If nitrite levels in the water are high, nitrites will also accumulate in the blood and tissues of fish, where they interfere with oxygen transport (nitrite causes oxidation of the heme group of haemoglobin from ferrous (Fe2+) to ferric (Fe3+), making haemoglobin unable to bind O2). Despite these apparent positive effects, ozone use in recirculation systems has been linked to reducing the level of bioavailable iodine in salt water systems, resulting in iodine deficiency symptoms such as goitre and decreased growth in Senegalese sole (Solea senegalensis) larvae.
Ozonated seawater is used for surface disinfection of haddock and Atlantic halibut eggs against nodavirus. Nodavirus is a lethal and vertically transmitted virus which causes severe mortality in fish. Haddock eggs should not be treated with high ozone levels, as eggs so treated did not hatch and died after 3–4 days. Agriculture Ozone application on freshly cut pineapple and banana shows an increase in flavonoid and total phenol content when exposure is up to 20 minutes. A decrease in ascorbic acid (one form of vitamin C) content is observed, but the positive effect on total phenol content and flavonoids can outweigh the negative effect. Tomatoes upon treatment with ozone show an increase in β-carotene, lutein and lycopene. However, ozone application on strawberries in the pre-harvest period shows a decrease in ascorbic acid content. Ozone facilitates the extraction of some heavy metals from soil using EDTA. EDTA forms strong, water-soluble coordination compounds with some heavy metals (Pb, Zn), thereby making it possible to dissolve them out from contaminated soil. If contaminated soil is pre-treated with ozone, the extraction efficacy of Pb, Am and Pu increases by 11.0–28.9%, 43.5% and 50.7%, respectively. Effect on pollinators Crop pollination is an essential part of an ecosystem, and ozone can have detrimental effects on plant-pollinator interactions. Pollinators carry pollen from one plant to another, an essential cycle within an ecosystem. Changes in atmospheric conditions around pollination sites, or the presence of xenobiotics, could cause unknown changes to the natural cycles of pollinators and flowering plants. In a study conducted in North-Western Europe, crop pollinators were more negatively affected when ozone levels were higher. Alternative medicine The use of ozone for the treatment of medical conditions is not supported by high-quality evidence, and is generally considered alternative medicine.
Orchid
Orchids are plants that belong to the family Orchidaceae (), a diverse and widespread group of flowering plants with blooms that are often colourful and fragrant. Orchids are cosmopolitan plants that are found in almost every habitat on Earth except glaciers. The world's richest diversity of orchid genera and species is found in the tropics. Orchidaceae is one of the two largest families of flowering plants, along with the Asteraceae. It contains about 28,000 currently accepted species distributed across 763 genera. The Orchidaceae family encompasses about 6–11% of all species of seed plants. The largest genera are Bulbophyllum (2,000 species), Epidendrum (1,500 species), Dendrobium (1,400 species) and Pleurothallis (1,000 species). It also includes Vanilla (the genus of the vanilla plant), the type genus Orchis, and many commonly cultivated plants such as Phalaenopsis and Cattleya. Moreover, since the introduction of tropical species into cultivation in the 19th century, horticulturists have produced many hybrids and cultivars. Description Orchids are easily distinguished from other plants, as they share some very evident derived characteristics or synapomorphies. Among these are: bilateral symmetry of the flower (zygomorphism), many resupinate flowers, a nearly always highly modified petal (labellum), fused stamens and carpels, and extremely small seeds. Stem and roots All orchids are perennial herbs that lack any permanent woody structure. They can grow according to two patterns: Monopodial: The stem grows from a single bud, leaves are added from the apex each year, and the stem grows longer accordingly. The stem of orchids with a monopodial growth can reach several metres in length, as in Vanda and Vanilla. Sympodial: Sympodial orchids have a front (the newest growth) and a back (the oldest growth). The plant produces a series of adjacent shoots, which grow to a certain size, bloom and then stop growing and are replaced. Sympodial orchids grow horizontally, rather than vertically, following the surface of their support. The growth continues by development of new leads, with their own leaves and roots, sprouting from or next to those of the previous year, as in Cattleya. While a new lead is developing, the rhizome may start its growth again from a so-called 'eye', an undeveloped bud, thereby branching. Sympodial orchids may have visible pseudobulbs joined by a rhizome, which creeps along the top or just beneath the soil. Terrestrial orchids may be rhizomatous or form corms or tubers. The root caps of terrestrial orchids are smooth and white. Some sympodial terrestrial orchids, such as Orchis and Ophrys, have two subterranean tuberous roots. One is used as a food reserve for wintry periods, and provides for the development of the other one, from which visible growth develops. In warm and constantly humid climates, many terrestrial orchids do not need pseudobulbs. Epiphytic orchids, those that grow upon a support, have modified aerial roots that can sometimes be a few meters long. In the older parts of the roots, a modified spongy epidermis, called a velamen, has the function of absorbing humidity. It is made of dead cells and can have a silvery-grey, white or brown appearance. In some orchids, the velamen includes spongy and fibrous bodies near the passage cells, called tilosomes. The cells of the root epidermis grow at a right angle to the axis of the root to allow them to get a firm grasp on their support. 
Nutrients for epiphytic orchids mainly come from mineral dust, organic detritus, animal droppings and other substances collecting among on their supporting surfaces. The base of the stem of sympodial epiphytes, or in some species essentially the entire stem, may be thickened to form a pseudobulb that contains nutrients and water for drier periods. The pseudobulb typically has a smooth surface with lengthwise grooves, and can have different shapes, often conical or oblong. Its size is very variable; in some small species of Bulbophyllum, it is no longer than two millimeters, while in the largest orchid in the world, Grammatophyllum speciosum (giant orchid), it can reach three meters. Some Dendrobium species have long, canelike pseudobulbs with short, rounded leaves over the whole length; some other orchids have hidden or extremely small pseudobulbs, completely included inside the leaves. With ageing the pseudobulb sheds its leaves and becomes dormant. At this stage it is often called a backbulb. Backbulbs still hold nutrition for the plant, but then a pseudobulb usually takes over, exploiting the last reserves accumulated in the backbulb, which eventually dies off, too. A pseudobulb typically lives for about five years. Orchids without noticeable pseudobulbs are also said to have growths, an individual component of a sympodial plant. Leaves Like most monocots, orchids generally have simple leaves with parallel veins, although some Vanilloideae have reticulate venation. Leaves may be ovate, lanceolate, or orbiculate, and very variable in size on the individual plant. Their characteristics are often diagnostic. They are normally alternate on the stem, often folded lengthwise along the centre ("plicate"), and have no stipules. Orchid leaves often have siliceous bodies called stegmata in the vascular bundle sheaths (not present in the Orchidoideae) and are fibrous. The structure of the leaves corresponds to the specific habitat of the plant. Species that typically bask in sunlight, or grow on sites which can be occasionally very dry, have thick, leathery leaves and the laminae are covered by a waxy cuticle to retain their necessary water supply. Shade-loving species, on the other hand, have long, thin leaves. The leaves of most orchids are perennial, that is, they live for several years, while others, especially those with plicate leaves as in Catasetum, shed them annually and develop new leaves together with new pseudobulbs. The leaves of some orchids are considered ornamental. The leaves of Macodes sanderiana, a semiterrestrial or rock-hugging ("lithophyte") orchid, show a sparkling silver and gold veining on a light green background. The cordate leaves of Psychopsiella limminghei are light brownish-green with maroon-puce markings, created by flower pigments. The attractive mottle of the leaves of lady's slippers from tropical and subtropical Asia (Paphiopedilum), is caused by uneven distribution of chlorophyll. Also, Phalaenopsis schilleriana is a pastel pink orchid with leaves spotted dark green and light green. The jewel orchid (Ludisia discolor) is grown more for its colorful leaves than its white flowers. Some orchids, such as Dendrophylax lindenii (ghost orchid), Aphyllorchis and Taeniophyllum depend on their green roots for photosynthesis and lack normally developed leaves, as do all of the heterotrophic species. 
Orchids of the genus Corallorhiza (coralroot orchids) lack leaves altogether and instead have symbiotic or parasitic associations with fungal mycelium, though which they absorb sugars. Flowers Orchid flowers have three sepals, three petals and a three-chambered ovary. The three sepals and two of the petals are often similar to each other but one petal is usually highly modified, forming a "lip" or labellum. In most orchid genera, as the flower develops, it undergoes a twisting through 180°, called resupination, so that the labellum lies below the column. The labellum functions to attract insects, and in resupinate flowers, also acts as a landing stage, or sometimes a trap. The reproductive parts of an orchid flower are unique in that the stamens and style are joined to form a single structure, the column. Instead of being released singly, thousands of pollen grains are contained in one or two bundles called pollinia that are attached to a sticky disc near the top of the column. Just below the pollinia is a second, larger sticky plate called the stigma. Reproduction Pollination The complex mechanisms that orchids have evolved to achieve cross-pollination were investigated by Charles Darwin and described in Fertilisation of Orchids (1862). Orchids have developed highly specialized pollination systems, thus the chances of being pollinated are often scarce, so orchid flowers usually remain receptive for very long periods, rendering unpollinated flowers long-lasting in cultivation. Most orchids deliver pollen in a single mass. Each time pollination succeeds, thousands of ovules can be fertilized. Pollinators are often visually attracted by the shape and colours of the labellum. However, some Bulbophyllum species attract male fruit flies (Bactrocera and Zeugodacus spp.) solely via a floral chemical which simultaneously acts as a floral reward (e.g. methyl eugenol, raspberry ketone, or zingerone) to perform pollination. The flowers may produce attractive odours. Although absent in most species, nectar may be produced in a spur of the labellum (8 in the illustration above), or on the point of the sepals, or in the septa of the ovary, the most typical position amongst the Asparagales. In orchids that produce pollinia, pollination happens as some variant of the following sequence: when the pollinator enters into the flower, it touches a viscidium, which promptly sticks to its body, generally on the head or abdomen. While leaving the flower, it pulls the pollinium out of the anther, as it is connected to the viscidium by the caudicle or stipe. The caudicle then bends and the pollinium is moved forwards and downwards. When the pollinator enters another flower of the same species, the pollinium has taken such position that it will stick to the stigma of the second flower, just below the rostellum, pollinating it. In horticulture, artificial orchid pollination is achieved by removing the pollinia with a small instrument such as a toothpick from the pollen parent and transferring them to the seed parent. Some orchids mainly or totally rely on self-pollination, especially in colder regions where pollinators are particularly rare. The caudicles may dry up if the flower has not been visited by any pollinator, and the pollinia then fall directly on the stigma. Otherwise, the anther may rotate and then enter the stigma cavity of the flower (as in Holcoglossum amesianum). The slipper orchid Paphiopedilum parishii reproduces by self-fertilization. 
This occurs when the anther changes from a solid to a liquid state and directly contacts the stigma surface without the aid of any pollinating agent or floral assembly. The labellum of the Cypripedioideae is poke bonnet-shaped, and has the function of trapping visiting insects. The only exit leads to the anthers that deposit pollen on the visitor. In some extremely specialized orchids, such as the Eurasian genus Ophrys, the labellum is adapted to have a colour, shape, and odour which attracts male insects via mimicry of a receptive female. Pollination happens as the insect attempts to mate with flowers. Many neotropical orchids are pollinated by male orchid bees, which visit the flowers to gather volatile chemicals they require to synthesize pheromonal attractants. Males of such species as Euglossa imperialis or Eulaema meriana have been observed to leave their territories periodically to forage for aromatic compounds, such as cineole, to synthesize pheromone for attracting and mating with females. Each type of orchid places the pollinia on a different body part of a different species of bee, so as to enforce proper cross-pollination. A rare achlorophyllous saprophytic orchid growing entirely underground in Australia, Rhizanthella slateri, is never exposed to light, and depends on ants and other terrestrial insects to pollinate it. Catasetum, a genus discussed briefly by Darwin, actually launches its viscid pollinia with explosive force when an insect touches a seta, knocking the pollinator off the flower. After pollination, the sepals and petals fade and wilt, but they usually remain attached to the ovary. In 2011, Bulbophyllum nocturnum was discovered to flower nocturnally. Asexual reproduction Some species, such as in the genera Phalaenopsis, Dendrobium, and Vanda, produce offshoots or plantlets formed from one of the nodes along the stem, through the accumulation of growth hormones at that point. These shoots are known as keiki. Epipogium aphyllum exhibits a dual reproductive strategy, engaging in both sexual and asexual seed production. The likelihood of apomixis playing a substantial role in successful reproduction appears minimal. Within certain petite orchid species groups, there is a noteworthy preparation of female gametes for fertilization preceding the act of pollination. Fruits and seeds The ovary typically develops into a capsule that is dehiscent by three or six longitudinal slits, while remaining closed at both ends. The seeds are generally almost microscopic and very numerous, in some species over a million per capsule. After ripening, they blow off like dust particles or spores. Most orchid species lack endosperm in their seed and must enter symbiotic relationships with various mycorrhizal basidiomyceteous fungi that provide them the necessary nutrients to germinate, so almost all orchid species are mycoheterotrophic during germination and reliant upon fungi to complete their lifecycles. Only a handful of orchid species have seed that can germinate without mycorrhiza, namely the species within the genus Disa with hydrochorous seeds. As the chance for a seed to meet a suitable fungus is very small, only a minute fraction of all the seeds released grow into adult plants. In cultivation, germination typically takes weeks. Horticultural techniques have been devised for germinating orchid seeds on an artificial nutrient medium, eliminating the requirement of the fungus for germination and greatly aiding the propagation of ornamental orchids. 
The usual medium for the sowing of orchids in artificial conditions is agar gel combined with a carbohydrate energy source. The carbohydrate source can be combinations of discrete sugars or can be derived from other sources such as banana, pineapple, peach, or even tomato puree or coconut water. After the preparation of the agar medium, it is poured into test tubes or jars which are then autoclaved (or cooked in a pressure cooker) to sterilize the medium. After cooking, the medium begins to gel as it cools. Taxonomy The taxonomy of this family is in constant flux, as new studies continue to clarify the relationships between species and groups of species, allowing more taxa at several ranks to be recognized. The Orchidaceae is currently placed in the order Asparagales by the APG III system of 2009. Five subfamilies are recognised. The cladogram below was made according to the APG system of 1998. It represents the view that most botanists had held up to that time. It was supported by morphological studies, but never received strong support in molecular phylogenetic studies. In 2015, a phylogenetic study showed strong statistical support for the following topology of the orchid tree, using 9 kb of plastid and nuclear DNA from 7 genes, a topology that was confirmed by a phylogenomic study in the same year. Evolution A study in the scientific journal Nature has hypothesised that the origin of the orchids goes back much longer than originally expected. An extinct species of stingless bee, Proplebeia dominicana, was found trapped in Miocene amber from about 15–20 million years ago. The bee was carrying pollen of a previously unknown orchid taxon, Meliorchis caribea, on its wings. This find is the first evidence of fossilised orchids to date and shows insects were active pollinators of orchids then. This extinct orchid, M. caribea, has been placed within the extant tribe Cranichideae, subtribe Goodyerinae (subfamily Orchidoideae). An even older orchid species, Succinanthera baltica, was described from the Eocene Baltic amber by Poinar & Rasmussen (2017). Genetic sequencing indicates orchids may have arisen earlier, 76 to 84 million years ago during the Late Cretaceous. According to Mark W. Chase et al. (2001), the overall biogeography and phylogenetic patterns of Orchidaceae show they are even older and may go back roughly 100 million years. Using the molecular clock method, it was possible to determine the age of the major branches of the orchid family. This also confirmed that the subfamily Vanilloideae is a branch at the basal dichotomy of the monandrous orchids, and must have evolved very early in the evolution of the family. Since this subfamily occurs worldwide in tropical and subtropical regions, from tropical America to tropical Asia, New Guinea and West Africa, and the continents began to split about 100 million years ago, significant biotic exchange must have occurred after this split (since the age of Vanilla is estimated at 60 to 70 million years). Recent biogeographic studies conducted on densely sampled phylogenies indicated that the most recent common ancestor of all extant orchids probably originated somewhere 83 million years ago in the supercontinent Laurasia. Despite their long evolutionary history on Earth, the extant orchid diversity is also inferred to have originated during the last 5 million years, with the American and Asian tropics as the geopgraphic areas exhibiting the highest speciation rates (i.e., number of speciation events per million years) on Earth. 
Genome duplication occurred prior to the divergence of this taxon. Genera There are around 800 genera of orchids. The following are amongst the most notable genera of the orchid family: Aa Abdominea Acampe Acanthophippium Aceratorchis Acianthus Acineta Acrorchis Ada Aerangis Aeranthes Aerides Aganisia Agrostophyllum Anacamptis Ancistrochilus Angraecum Anguloa Ansellia Aorchis Aplectrum Arachnis Arethusa Armodorum Ascoglossum Australorchis Auxopus Barkeria Bartholina Beloglottis Biermannia Bletilla Brassavola Brassia Bulbophyllum Calanthe Calypso Catasetum Cattleya Chiloschista Cirrhopetalum Cleisostoma Clowesia Coelogyne Coryanthes Cycnoches Cymbidium Cyrtopodium Cypripedium Dactylorhiza Dendrobium Disa Dracula Encyclia Epidendrum Epipactis Eria Eulophia Gastrochilus Gongora Goodyera Grammatophyllum Gymnadenia Habenaria Herschelia Ionopsis Laelia Lepanthes Liparis Ludisia Lycaste Masdevallia Maxillaria Meliorchis Mexipedium Miltonia Mormodes Odontoglossum Oeceoclades Oncidium Ophrys Orchis Paphiopedilum Papilionanthe Paraphalaenopsis Peristeria Phaius Phalaenopsis Pholidota Phragmipedium Platanthera Platystele Pleione Pleurothallis Pomatocalpa Promenaea Pterostylis Renanthera Restrepia Restrepiella Rhynchostylis Roezliella Saccolabium Sarcochilus Satyrium Seidenfadenia Selenipedium Serapias Sobralia Spiranthes Stanhopea Stelis Thrixspermum Tolumnia Trias Trichocentrum Trichoglottis Vanda Vanilla Yoania Zeuxine Zygopetalum Etymology The type genus (i.e. the genus after which the family is named) is Orchis. The genus name comes from the Ancient Greek (), literally meaning "testicle", because of the shape of the twin tubers in some species of Orchis. The term "orchid" was introduced in 1845 by John Lindley in School Botany, as a shortened form of Orchidaceae. In Middle English, the name bollockwort was used for some orchids, based on "bollock" meaning testicle and "wort" meaning plant. Hybrids Orchid species hybridize readily in cultivation, leading to a large number of hybrids with complex naming. Hybridization is possible across genera, and therefore many cultivated orchids are placed into nothogenera. For instance, the nothogenus × Brassocattleya is used for all hybrids of species from the genera Brassavola and Cattleya. Nothogenera based on at least three genera may have names based on a person's name with the suffix -ara, for instance × Colmanara = Miltonia × Odontoglossum × Oncidium. (The suffix is obligatory starting at four genera.) Cultivated hybrids in the orchid family are also special in that they are named by using grex nomenclature, rather than nothospecies. For instance, hybrids between Brassavola nodosa and Brassavola acaulis are placed in the grex Brassavola Guiseppi. The name of the grex ("Guiseppi" in this example) is written in a non-italic font without quotes. Abbreviations As a unique feature of the orchid family, a system of abbreviations exists that applies to names of genera and nothogenera. The system is maintained by the Royal Horticultural Society. These abbreviations consist of at least one character, but may be longer. As opposed to the usual one-letter abbreviations used for names of genera, orchid abbreviations uniquely determine the (notho)genus. They are widely used in cultivation. Examples are Phal for Phalaenopsis, V for Vanda and Cleis for Cleisostoma. Distribution Orchidaceae are cosmopolitan, occurring in almost every habitat apart from glaciers. 
The world's richest diversity of orchid genera and species is found in the tropics, but they are also found above the Arctic Circle, in southern Patagonia, and two species of Nematoceras on Macquarie Island at 54° south. The following list gives a rough overview of their distribution: Oceania: 50 to 70 genera North America: 20 to 26 genera tropical America: 212 to 250 genera tropical Asia: 260 to 300 genera tropical Africa: 230 to 270 genera Europe and temperate Asia: 40 to 60 genera Ecology A majority of orchids are perennial epiphytes, which grow anchored to trees or shrubs in the tropics and subtropics. Species such as Angraecum sororium are lithophytes, growing on rocks or very rocky soil. Other orchids (including the majority of temperate Orchidaceae) are terrestrial and can be found in habitat areas such as grasslands or forest. Some orchids, such as Neottia and Corallorhiza, lack chlorophyll, so are unable to photosynthesise. Instead, these species obtain energy and nutrients by parasitising soil fungi through the formation of orchid mycorrhizae. The fungi involved include those that form ectomycorrhizas with trees and other woody plants, parasites such as Armillaria, and saprotrophs. These orchids are known as myco-heterotrophs, but were formerly (incorrectly) described as saprophytes as it was believed they gained their nutrition by breaking down organic matter. While only a few species are achlorophyllous holoparasites, all orchids are myco-heterotrophic during germination and seedling growth, and even photosynthetic adult plants may continue to obtain carbon from their mycorrhizal fungi. The symbiosis is typically maintained throughout the lifetime of the orchid because they depend on the fungus for nutrients, sugars and minerals. Uses Perfumery The scent of orchids is frequently analysed by perfumers (using headspace technology and gas-liquid chromatography/mass spectrometry) to identify potential fragrance chemicals. Horticulture The other important use of orchids is their cultivation for the enjoyment of the flowers. Most cultivated orchids are tropical or subtropical, but quite a few that grow in colder climates can be found on the market. Temperate species available at nurseries include Ophrys apifera (bee orchid), Gymnadenia conopsea (fragrant orchid), Anacamptis pyramidalis (pyramidal orchid) and Dactylorhiza fuchsii (common spotted orchid). Orchids of all types have also often been sought by collectors of both species and hybrids. Many hundreds of societies and clubs worldwide have been established. These can be small, local clubs, or larger, national organisations such as the American Orchid Society. Both serve to encourage cultivation and collection of orchids, but some go further by concentrating on conservation or research. The term "botanical orchid" loosely denotes those small-flowered, tropical orchids belonging to several genera that do not fit into the "florist" orchid category. A few of these genera contain enormous numbers of species. Some, such as Pleurothallis and Bulbophyllum, contain approximately 1700 and 2000 species, respectively, and are often extremely vegetatively diverse. The primary use of the term is among orchid hobbyists wishing to describe unusual species they grow, though it is also used to distinguish naturally occurring orchid species from horticulturally created hybrids. New orchids are registered with the International Orchid Register, maintained by the Royal Horticultural Society. Several thousand new grexes are registered each year. 
Food The dried seed pods of one orchid genus, Vanilla (especially Vanilla planifolia), are commercially important as a flavouring in baking, for perfume manufacture and aromatherapy. The underground tubers of terrestrial orchids [mainly Orchis mascula (early purple orchid)] are ground to a powder and used for cooking, such as in the hot beverage salep or in the Turkish mastic ice cream dondurma. The name salep has been claimed to come from the Arabic expression , "fox testicles", but it appears more likely the name comes directly from the Arabic name . The similarity in appearance to testes naturally accounts for salep being considered an aphrodisiac. The dried leaves of Jumellea fragrans are used to flavour rum on Reunion Island. Some saprophytic orchid species of the group Gastrodia produce potato-like tubers and were consumed as food by native peoples in Australia and can be successfully cultivated, notably Gastrodia sesamoides. Wild stands of these plants can still be found in the same areas as early Aboriginal settlements, such as Ku-ring-gai Chase National Park in Australia. Aboriginal peoples located the plants in habitat by observing where bandicoots had scratched in search of the tubers after detecting the plants underground by scent. Cultural symbolism Orchids have many associations with symbolic values. For example, the orchid is the City Flower of Shaoxing, China. Cattleya mossiae is the national Venezuelan flower, while Cattleya trianae is the national flower of Colombia. Vanda Miss Joaquim is the national flower of Singapore, Guarianthe skinneri is the national flower of Costa Rica and Rhyncholaelia digbyana is the national flower of Honduras. Prosthechea cochleata is the national flower of Belize, where it is known as the black orchid. Lycaste skinneri has a white variety (alba) that is the national flower of Guatemala, commonly known as Monja Blanca (White Nun). Panama's national flower is the Holy Ghost orchid (Peristeria elata), or 'the flor del Espiritu Santo'. Rhynchostylis retusa is the state flower of the Indian state of Assam where it is known as Kopou Phul. Orchids native to the Mediterranean are depicted on the Ara Pacis in Rome, until now the only known instance of orchids in ancient art, and the earliest in European art. A French writer and agronomist, Louis Liger, invented a classical myth in his book Le Jardinier Fleuriste et Historiographe published in 1704, attributing it to the ancient Greeks and Romans, in which Orchis the son of a nymph and a satyr rapes a priestess of Bacchus during one of his festivals the Bacchanalia and is then killed and transformed into an orchid flower as punishment by the gods, paralleling the various myths of youths dying and becoming flowers, like Adonis and Narcissus; this myth however does not appear any earlier than Liger, and is not part of traditional Greek and Roman mythologies. Conservation Almost all orchids are included in Appendix II of the Convention on International Trade in Endangered Species (CITES), meaning that international trade (including in their parts/derivatives) is regulated by the CITES permit system. A smaller number of orchids such as Paphiopedilum sp. are listed in CITES Appendix I meaning that commercial international trade in wild-sourced specimens is prohibited and all other trade is strictly controlled. Assisted migration as conservation tool In 2006 the Longtan Dam was constructed at the Hongshui River, near the Yachang Orchid Nature Reserve. 
In response to threats of inundation of wild orchids at lower altitudes (350–400 m above sea level), 1000 endangered orchid plants of 16 genera and 29 species were translocated to higher elevation (approximately 1000 m above sea level). After relocation the 5 year survival of low and wide elevation species did not significantly differ and the mortality due to transplant shock was at only 10%. From this it was concluded that assisted migration might be a viable conservation tool for orchid species endangered by climate change. Toxicity Plants in the genus Phalaenopsis are not toxic to pets, according to the American Society for the Prevention of Cruelty to Animals.
Biology and health sciences
Monocots
null
22721
https://en.wikipedia.org/wiki/Obsidian
Obsidian
Obsidian ( ) is a naturally occurring volcanic glass formed when lava extruded from a volcano cools rapidly with minimal crystal growth. It is an igneous rock. Produced from felsic lava, obsidian is rich in the lighter elements such as silicon, oxygen, aluminium, sodium, and potassium. It is commonly found within the margins of rhyolitic lava flows known as obsidian flows. These flows have a high content of silica, giving them a high viscosity. The high viscosity inhibits diffusion of atoms through the lava, which inhibits the first step (nucleation) in the formation of mineral crystals. Together with rapid cooling, this results in a natural glass forming from the lava. Obsidian is hard, brittle, and amorphous; it therefore fractures with sharp edges. In the past, it was used to manufacture cutting and piercing tools, and it has been used experimentally as surgical scalpel blades. Origin and properties The Natural History by the Roman writer Pliny the Elder includes a few sentences about a volcanic glass called obsidian (lapis obsidianus), discovered in Ethiopia by Obsidius, a Roman explorer. Obsidian is formed from quickly cooled lava, which is the parent material. Extrusive formation of obsidian may occur when felsic lava cools rapidly at the edges of a felsic lava flow or volcanic dome, or when lava cools during sudden contact with water or air. Intrusive formation of obsidian may occur when felsic lava cools along the edges of a dike. Tektites were once thought by many to be obsidian produced by lunar volcanic eruptions, though few scientists now adhere to this hypothesis. Obsidian is mineral-like, but not a true mineral because, as a glass, it is not crystalline; in addition, its composition is too variable to be classified as a mineral. It is sometimes classified as a mineraloid. Though obsidian is usually dark in color, similar to mafic rocks such as basalt, the composition of obsidian is extremely felsic. Obsidian consists mainly of SiO2 (silicon dioxide), usually 70% by weight or more; the remainder consists of variable amounts of other oxides, mostly oxides of aluminum, iron, potassium, sodium and calcium. Crystalline rocks with a similar composition include granite and rhyolite. Because obsidian is metastable at the Earth's surface (over time the glass devitrifies, becoming fine-grained mineral crystals), obsidian older than Miocene in age is rare. Exceptionally old obsidians include a Cretaceous welded tuff and a partially devitrified Ordovician perlite. This transformation of obsidian is accelerated by the presence of water. Although newly formed obsidian has a low water content, typically less than 1% water by weight, it becomes progressively hydrated when exposed to groundwater, forming perlite. Pure obsidian is usually dark in appearance, though the color varies depending on the impurities present. Iron and other transition elements may give the obsidian a dark brown to black color. Most black obsidians contain nanoinclusions of magnetite, an iron oxide. Very few samples of obsidian are nearly colorless. In some stones, the inclusion of small, white, radially clustered crystals (spherulites) of the mineral cristobalite in the black glass produce a blotchy or snowflake pattern (snowflake obsidian). Obsidian may contain patterns of gas bubbles remaining from the lava flow, aligned along layers created as the molten rock was flowing before being cooled. These bubbles can produce interesting effects such as a golden sheen (sheen obsidian). 
An iridescent, rainbow-like sheen (fire obsidian) is caused by inclusions of magnetite nanoparticles creating thin-film interference. Colorful, striped obsidian (rainbow obsidian) from Mexico contains oriented nanorods of hedenbergite, which cause the rainbow striping effects by thin-film interference. Occurrence Obsidian is found near volcanoes in locations which have undergone rhyolitic eruptions. It can be found in Argentina, Armenia, Azerbaijan, Australia, Canada, Chile, Georgia, Ecuador, El Salvador, Greece, Guatemala, Hungary, Iceland, Indonesia, Italy, Japan, Kenya, Mexico, New Zealand, Papua New Guinea, Peru, Russia, Scotland, the Canary Islands, Turkey and the United States. Obsidian flows which are so large that they can be hiked on are found within the calderas of Newberry Volcano (Big Obsidian Flow, 700 acres) and Medicine Lake Volcano in the Cascade Range of western North America, and at Inyo Craters east of the Sierra Nevada in California. Yellowstone National Park has a mountainside containing obsidian located between Mammoth Hot Springs and the Norris Geyser Basin, and deposits can be found in many other western U.S. states including Arizona, Colorado, New Mexico, Texas, Utah, and Washington, Oregon and Idaho. There are only four major deposit areas in the central Mediterranean: Lipari, Pantelleria, Palmarola and Monte Arci (Sardinia). Ancient sources in the Aegean were Milos and Gyali. Acıgöl town and the Göllü Dağ volcano were the most important sources in central Anatolia, one of the more important source areas in the prehistoric Near East. Prehistoric and historical use The first known archaeological evidence of usage was in Kariandusi (Kenya) and other sites of the Acheulian age (beginning 1.5 million years BP) dated 700,000 BC, although only very few objects have been found at these sites relative to the Neolithic. Manufacture of obsidian bladelets at Lipari had reached a high level of sophistication by the late Neolithic, and was traded as far as Sicily, the southern Po river valley, and Croatia. Obsidian bladelets were used in ritual circumcisions and cutting of umbilical cords of newborns. Anatolian sources of obsidian are known to have been the material used in the Levant and modern-day Iraqi Kurdistan from a time beginning sometime about 12,500 BC. Obsidian artifacts are common at Tell Brak, one of the earliest Mesopotamian urban centers, dating to the late fifth millennium BC. Obsidian was valued in Stone Age cultures because, like flint, it could be fractured to produce sharp blades or arrowheads in a process called knapping. Like all glass and some other naturally occurring rocks, obsidian breaks with a characteristic conchoidal fracture. It was also polished to create early mirrors. Modern archaeologists have developed a relative dating system, obsidian hydration dating, to calculate the age of obsidian artifacts. Europe Obsidian artifacts first appeared in the European continent in Central Europe in the Middle Paleolithic and had become common by the Upper Paleolithic, although there are exceptions to this. Obsidian played an important role in the transmission of Neolithic knowledge and experiences. The material was mainly used for production of chipped tools which were very sharp due to its nature. Artifacts made of obsidian can be found in many Neolithic cultures across Europe. 
The source of obsidian for cultures inhabiting the territory of and around Greece was the island of Milos; the Starčevo–Körös–Criș culture obtained obsidian from sources in Hungary and Slovakia, while the Cardium-Impresso cultural complex acquired obsidian from the island outcrops of the central Mediterranean. Through trade, these artifacts ended up in lands thousands of kilometers away from the original source; this indicates that they were a highly valued commodity. John Dee had a mirror, made of obsidian, which was brought from Mexico to Europe between 1527 and 1530 after Hernando Cortés's conquest of the region. Middle East and Asia In the Ubaid in the 5th millennium BC, blades were manufactured from obsidian extracted from outcrops located in modern-day Turkey. Ancient Egyptians used obsidian imported from the eastern Mediterranean and southern Red Sea regions. Obsidian scalpels older than 2100 BC have been found in a Bronze Age settlement in Turkey. In the eastern Mediterranean area the material was used to make tools, mirrors and decorative objects. The use of obsidian tools was present in Japan near areas of volcanic activity. Obsidian was mined during the Jōmon period. Obsidian has also been found in Gilat, a site in the western Negev in Israel. Eight obsidian artifacts dating to the Chalcolithic Age found at this site were traced to obsidian sources in Anatolia. Neutron activation analysis (NAA) on the obsidian found at this site helped to reveal trade routes and exchange networks previously unknown. Americas Lithic analysis helps to understand pre-Hispanic groups in Mesoamerica. A careful analysis of obsidian in a culture or place can be of considerable use to reconstruct commerce, production, and distribution, and thereby understand economic, social and political aspects of a civilization. This is the case in Yaxchilán, a Maya city where even warfare implications have been studied linked with obsidian use and its debris. Another example is the archeological recovery at coastal Chumash sites in California, indicating considerable trade with the distant site of Casa Diablo Hot Springs in the Sierra Nevada. Pre-Columbian Mesoamericans' use of obsidian was extensive and sophisticated; including carved and worked obsidian for tools and decorative objects. Mesoamericans also made a type of sword with obsidian blades mounted in a wooden body. Called a macuahuitl, the weapon could inflict terrible injuries, combining the sharp cutting edge of an obsidian blade with the ragged cut of a serrated weapon. The polearm version of this weapon was called tepoztopilli. Obsidian mirrors were used by some Aztec priests to conjure visions and make prophecies. They were connected with Tezcatlipoca, god of obsidian and sorcery, whose name can be translated from the Nahuatl language as 'Smoking Mirror'. Indigenous people traded obsidian throughout the Americas. Each volcano and in some cases each volcanic eruption produces a distinguishable type of obsidian allowing archaeologists to use methods such as non-destructive energy dispersive X-ray fluorescence to select minor element compositions from both the artifact and geological sample to trace the origins of a particular artifact. Similar tracing techniques have also allowed obsidian in Greece to be identified as coming from Milos, Nisyros or Gyali, islands in the Aegean Sea. Obsidian cores and blades were traded great distances inland from the coast. 
In Chile obsidian tools from Chaitén Volcano have been found as far away as in Chan-Chan north of the volcano, and also in sites 400 km south of it. Oceania The Lapita culture, active across a large area of the Pacific Ocean around 1000 BC, made widespread use of obsidian tools and engaged in long distance obsidian trading. The complexity of the production technique for these tools, and the care taken in their storage, may indicate that beyond their practical use they were associated with prestige or high status. Obsidian was also used on Rapa Nui (Easter Island) for edged tools such as Mataia and the pupils of the eyes of their Moai (statues), which were encircled by rings of bird bone. Obsidian was used to inscribe the Rongorongo glyphs. Current use Obsidian can be used to make extremely sharp knives, and obsidian blades are a type of glass knife made using naturally occurring obsidian instead of manufactured glass. Obsidian is used by some surgeons for scalpel blades, although this is not approved by the US Food and Drug Administration (FDA) for use on humans. Well-crafted obsidian blades, like any glass knife, can have a cutting edge many times sharper than high-quality steel surgical scalpels: the cutting edge of the blade is only about three nanometers thick. All metal knives have a jagged, irregular blade when viewed under a strong enough microscope; however, obsidian blades are still smooth, even when examined under an electron microscope. One study found that obsidian incisions produced fewer inflammatory cells and less granulation tissue in a group of rats after seven days but the differences disappeared after twenty-one days. Don Crabtree has produced surgical obsidian blades and written articles on the subject. Obsidian scalpels may be purchased for surgical use on research animals. The major disadvantage of obsidian blades is their brittleness compared to those made of metal, thus limiting the surgical applications for obsidian blades to a variety of specialized uses where this is not a concern. Obsidian is also used for ornamental purposes and as a gemstone. It presents a different appearance depending on how it is cut: in one direction it is jet black, while in another it is glistening gray. "Apache tears" are small rounded obsidian nuggets often embedded within a grayish-white perlite matrix. Plinths for audio turntables have been made of obsidian since the 1970s, such as the grayish-black SH-10B3 plinth by Technics.
Physical sciences
Igneous rocks
Earth science
22739
https://en.wikipedia.org/wiki/Obfuscation%20%28software%29
Obfuscation (software)
In software development, obfuscation is the practice of creating source or machine code that is intentionally difficult for humans or computers to understand. Similar to obfuscation in natural language, code obfuscation may involve using unnecessarily roundabout ways to write statements. Programmers often obfuscate code to conceal its purpose, logic, or embedded values. The primary reasons for doing so are to prevent tampering, deter reverse engineering, or to create a puzzle or recreational challenge to deobfuscate the code, a challenge often included in crackmes. While obfuscation can be done manually, it is more commonly performed using obfuscators. Overview The architecture and characteristics of some languages may make them easier to obfuscate than others. C, C++, and the Perl programming language are some examples of languages easy to obfuscate. Haskell is also quite obfuscatable despite being quite different in structure. The properties that make a language obfuscatable are not immediately obvious. Techniques Types of obfuscations include simple keyword substitution, use or non-use of whitespace to create artistic effects, and self-generating or heavily compressed programs. According to Nick Montfort, techniques may include: naming obfuscation, which includes naming variables in a meaningless or deceptive way; data/code/comment confusion, which includes making some actual code look like comments or confusing syntax with data; double coding, which can be displaying code in poetry form or interesting shapes. Automated tools A variety of tools exist to perform or assist with code obfuscation. These include experimental research tools developed by academics, hobbyist tools, commercial products written by professionals, and open-source software. Additionally, deobfuscation tools exist, aiming to reverse the obfuscation process. While most commercial obfuscation solutions transform either program source code or platform-independent bytecode (as used by Java and .NET), some also work directly on compiled binaries. Some Python examples can be found in the official Python programming FAQ and elsewhere. The movfuscator C compiler for the x86_32 ISA uses only the mov instruction in order to obfuscate. Recreational Writing and reading obfuscated source code can be a brain teaser. A number of programming contests reward the most creatively obfuscated code, such as the International Obfuscated C Code Contest and the Obfuscated Perl Contest. Short obfuscated Perl programs may be used in signatures of Perl programmers. These are JAPHs ("Just another Perl hacker"). Cryptographic Cryptographers have explored the idea of obfuscating code so that reverse-engineering the code is cryptographically hard. This is formalized in the many proposals for indistinguishability obfuscation, a cryptographic primitive that, if possible to build securely, would allow one to construct many other kinds of cryptography, including completely novel types that no one knows how to make. (A stronger notion, black-box obfuscation, is known to be impossible in general.) Disadvantages of obfuscation While obfuscation can make reading, writing, and reverse-engineering a program difficult and time-consuming, it will not necessarily make it impossible. It adds time and complexity to the build process for the developers. It can make debugging issues after the software has been obfuscated extremely difficult. Once code is no longer maintained, hobbyists may want to maintain the program, add mods, or understand it better. 
Obfuscation makes it hard for end users to do useful things with the code. Certain kinds of obfuscation (i.e. code that isn't just a local binary and downloads mini binaries from a web server as needed) can degrade performance and/or require Internet. Notifying users of obfuscated code Some anti-virus softwares, such as AVG AntiVirus, will also alert their users when they land on a website with code that is manually obfuscated, as one of the purposes of obfuscation can be to hide malicious code. However, some developers may employ code obfuscation for the purpose of reducing file size or increasing security. The average user may not expect their antivirus software to provide alerts about an otherwise harmless piece of code, especially from trusted corporations, so such a feature may actually deter users from using legitimate software. Mozilla and Google disallow browser extensions containing obfuscated code in their add-ons store. Obfuscation and copyleft licenses There has been debate on whether it is illegal to skirt copyleft software licenses by releasing source code in obfuscated form, such as in cases in which the author is less willing to make the source code available. The issue is addressed in the GNU General Public License by requiring the "preferred form for making modifications" to be made available. The GNU website states "Obfuscated 'source code' is not real source code and does not count as source code." Decompilers A decompiler is a tool that can reverse-engineer source code from an executable or library. This process is sometimes referred to as a man-in-the-end (mite) attack, inspired by the traditional "man-in-the-middle attack" in cryptography. The decompiled source code is often hard to read, containing random function and variable names, incorrect variable types, and logic that differs from the original source code due to compiler optimizations. Model obfuscation Model obfuscation is a technique to hide the internal structure of a machine learning model. Obfuscation turns a model into a black box. It is contrary to explainable AI. Obfuscation models can also be applied to training data before feeding it into the model to add random noise. This hides sensitive information about the properties of individual and groups of samples.
Technology
Computer security
null
22773
https://en.wikipedia.org/wiki/Oxidative%20phosphorylation
Oxidative phosphorylation
Oxidative phosphorylation (UK , US ) or electron transport-linked phosphorylation or terminal oxidation is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis. The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy. In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's outer membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors. The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process called electron transport. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme. The ATP synthase is a rotary mechanical motor. Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities. Chemiosmosis Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled. This means one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis. 
A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge. ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP. The two components of the proton-motive force are thermodynamically equivalent: In mitochondria, the largest part of energy is provided by the potential; in alkaliphile bacteria the electrical energy even has to compensate for a counteracting inverse pH difference. Inversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase. The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP. Electron and proton transfer molecules The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space. Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone. Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. 
The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 m. Eukaryotic electron transport chains Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point. In eukaryotes, the enzymes in this electron transport system use the energy released from O2 by NADH to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome. NADH-coenzyme Q oxidoreductase (complex I) NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme with the mammalian complex I having 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion. The reaction that is catalyzed by this enzyme is the two electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane: The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2. 
The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I. As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2). Succinate-Q oxidoreductase (complex II) Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient. In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis. Electron transfer flavoprotein-Q oxidoreductase Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer. In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acetyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness. Q-cytochrome c oxidoreductase (complex III) Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. 
In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, an [2Fe-2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and oxidized ferric (+3) state as the electrons are transferred through the protein. The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron. As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates, first, QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q.−, which is the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme. As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced. Cytochrome c oxidase (complex IV) Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc. This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen: Alternative reductases and oxidases Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. 
These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane. Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen. The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress. Organization of complexes The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model. Prokaryotic electron transport chains In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. In the bacteria, oxidative phosphorylation in Escherichia coli is understood in most detail, while archaeal systems are at present poorly understood. The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents, which are listed below. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials. As shown above, E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. 
Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively. Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH. Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems. In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, they switch to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen. ATP synthase (complex V) ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions. This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run from right to left, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the opposite direction; it proceeds from left to right, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolysing ATP. ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece is called F1 and is the site of ATP synthesis. 
The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme. As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP. This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site (shown in brown in the diagram). The protein then closes up around the molecules and binds them loosely – the "loose" state (shown in red). The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state (shown in pink) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle. In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases. Oxidative phosphorylation - energetics The transport of electrons from redox pair NAD+/ NADH to the final redox pair 1/2 O2/ H2O can be summarized as 1/2 O2 + NADH + H+ → H2O + NAD+ The potential difference between these two redox pairs is 1.14 volt, which is equivalent to -52 kcal/mol or -2600 kJ per 6 mol of O2. When one NADH is oxidized through the electron transfer chain, three ATPs are produced, which is equivalent to 7.3 kcal/mol x 3 = 21.9 kcal/mol. The conservation of the energy can be calculated by the following formula Efficiency = (21.9 x 100%) / 52 = 42% So we can conclude that when NADH is oxidized, about 42% of energy is conserved in the form of three ATPs and the remaining (58%) energy is lost as heat (unless the chemical energy of ATP under physiological conditions was underestimated). Reactive oxygen species Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. 
Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive. These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging. The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage" when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential. To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell. In hypoxic/anoxic conditions As oxygen is fundamental for oxidative phosphorylation, a shortage in O2 level can alter ATP production rates. The proton motive force and ATP production can be maintained by intracellular acidosis. Cytosolic protons that have accumulated with ATP hydrolysis and lactic acidosis can freely diffuse across the mitochondrial outer-membrane and acidify the inter-membrane space, hence directly contributing to the proton motive force and ATP production. When exposed to hypoxia/anoxia (no oxygen), most animals will see damage done to their mitochondria. From some species, these conditions can happen due to environmental variables, such as low tides, low temperatures, or general living conditions, like living in a hypoxic underground burrow. In humans, these conditions are commonly met in medical emergencies such as strokes, ischemia, and asphyxia. Despite this, or perhaps due to it, some species have developed their own defense mechanisms against anoxia/hypoxia, as well as during reperfusion/reoxygenation. These mechanisms are diverse and differ between endotherms and ectotherms and can differ even at the species level. Endotherms Hypoxia/anoxia intolerance Most mammals and birds are intolerant to low/no oxygen conditions. For the heart, in the absence of oxygen, the first four complexes of the electron transport chain decrease in activity. This will lead to protons leaking through the inner mitochondrial membrane without complexes I, III, and IV pushing protons back through to maintain the proton gradient. There is also electron leak (an event where electrons leak out of the electron transport chain), which happens because NADH dehydrogenase within Complex I becomes damaged, which allows for the production of ROS (reactive oxygen species) during ischemia. 
This will lead to the reversing of Complex V, which forces protons from the matrix back into the inner membrane space, against their concentration gradient. Forcing protons against their concentration gradient requires energy, so Complex V uses up ATP as an energy source. Reoxygenation of intolerant animals When oxygen re-enters the system, animals are faced with a different set of problems. Since ATP was used up during the anoxic period, it leads to a lack of ADP within the system. This is due to ADP's natural degradation into AMP, resulting in ADP being drained from the system. With no ADP in the system, Complex V is unable to start, meaning the protons will not flow through it to enter the matrix. Due to Complex V's reversal during anoxia, the proton gradient has become hyperpolarized (where the proton gradient is highly positively charged). Another factor in this problem is that succinate built up during anoxia, so when oxygen is reintroduced, succinate donates electrons to Complex II. The hyperpolarized gradient and succinate buildup leads to reverse electron transport, causing oxidative stress, which can lead to cellular damage and diseases. Hypoxia/anoxia tolerance The naked mole rat (Heterocephalus glaber) is a hypoxia-tolerant species that sleeps in deep burrows and in large colonies. The depth of these burrows reduces access to oxygen, and sleeping in large groups will deplete the area of oxygen quicker than usual, leading to hypoxia. The naked mole rat has the unique ability to survive low oxygen conditions for no less than several hours, and zero oxygen conditions for 18 minutes. One of the ways of combatting hypoxia in the brain is decreasing the reliance on oxygen for ATP production, achieved by decreased respiration rates and proton leak. Reoxygenation of tolerant animals Hypoxia/anoxia tolerant species handle ROS production during reoxygenation better than the intolerant. In the cortex of the naked mole rats, they show better homeostasis of ROS production than intolerant species and seem to lack the burst of ROS that typically comes with reoxygenation. Ectotherms Hypoxia/anoxia intolerance Research on intolerant ectotherms is more limited than on tolerant ectotherms and intolerant endotherms, but it is shown that anoxia/hypoxia intolerance is different in terms for how long the intolerant survive as opposed to the tolerant between endotherms and ectotherms. While intolerant endotherms only last minutes, intolerant ectotherms can last hours, such as subtidal scallops (Argopecten irradians). This difference in intolerance could be due to a couple of different factors. One advantage is that the ectothermic inner mitochondrial membrane is less leaky, so less protons will leak through the inner membrane due to differences in the phospholipid bilayer composition. Another advantage ectotherms tend to have in this category is an ability for their mitochondria to properly function in a wide range of temperatures, such as the western fence lizard (Sceloporus occidentalis). While western fence lizards are not considered a hypoxia-tolerant animal, they still showed less temperature sensitivity in their mitochondria than mice mitochondria. Reoxygenation of intolerant animals While it is unclear how reoxygenation affects intolerant ectotherms at the mitochondrial level, there is some research showing how some of them respond. 
In the hypoxia-sensitive shovelnose ray (Aptychotrema rostrata), ROS production has been shown to be lower upon reoxygenation than in rays exposed only to normoxia (normal oxygen levels). This differs from the hypoxia-sensitive endotherm, which would see an increase in ROS production. However, the ray's levels were still higher than those of the more hypoxia-tolerant epaulette shark (Hemiscyllium ocellatum), which may experience hypoxia during the bouts of low tide that occur on reef platforms. Subtidal scallops show both a decrease in maximal respiration and a depolarization of the membrane during reoxygenation. Hypoxia/anoxia tolerance Hypoxia/anoxia-tolerant ectotherms have shown unique strategies for surviving anoxia. Pond turtles, such as the painted turtle (Chrysemys picta bellii), will experience anoxia during winter while they overwinter at the bottom of frozen ponds. In their cardiac mitochondria, the reversal of Complex V, the consumption of ATP, and the build-up of succinate are all prevented during anoxia. Crucian carp (Carassius carassius) also overwinter in frozen ponds and show no loss of membrane potential in their cardiac mitochondria during anoxia, but this relies on complexes I and III remaining active. Reoxygenation of tolerant animals Pond turtles are able to completely avoid ROS production upon reoxygenation. However, crucian carp cannot, and are unable to prevent the death of brain cells upon reoxygenation. Inhibitors There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use. Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A block the transfer of electrons from NADH to coenzyme Q (that is, they inhibit complex I). Carbon monoxide, cyanide, hydrogen sulphide and azide effectively inhibit cytochrome oxidase. Carbon monoxide reacts with the reduced form of the cytochrome while cyanide and azide react with the oxidised form. An antibiotic, antimycin A, and British anti-Lewisite, an antidote used against chemical weapons, are the two important inhibitors of the site between cytochromes b and c1. Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress. History The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. 
However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation was coined in 1939. For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel Prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997.
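For readers who want a quantitative handle on the proton-motive force referred to repeatedly above, it can be written as the sum of an electrical and a chemical term. The following is a minimal sketch using typical textbook values for actively respiring mitochondria, not figures taken from this article:

```latex
% Proton-motive force (Delta p) across the inner mitochondrial membrane.
% Sign convention: all differences are "matrix minus intermembrane space".
\Delta p = \Delta\Psi - \frac{2.303\,R\,T}{F}\,\Delta\mathrm{pH}
         \approx \Delta\Psi - (61\ \mathrm{mV})\,\Delta\mathrm{pH} \quad \text{at } 37\,^{\circ}\mathrm{C}

% With a typical membrane potential of about -160 mV (matrix negative) and a
% pH difference of about +0.8 (matrix alkaline):
\Delta p \approx -160\ \mathrm{mV} - 61\ \mathrm{mV} \times 0.8 \approx -210\ \mathrm{mV}
% i.e. a force of roughly 200 mV driving protons back into the matrix through ATP synthase.
```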
Biology and health sciences
Metabolic processes
Biology
22780
https://en.wikipedia.org/wiki/Octopus
Octopus
An octopus (plural: octopuses or octopodes) is a soft-bodied, eight-limbed mollusc of the order Octopoda. The order consists of some 300 species and is grouped within the class Cephalopoda with squids, cuttlefish, and nautiloids. Like other cephalopods, an octopus is bilaterally symmetric with two eyes and a beaked mouth at the centre point of the eight limbs. The soft body can radically alter its shape, enabling octopuses to squeeze through small gaps. They trail their eight appendages behind them as they swim. The siphon is used both for respiration and for locomotion, by expelling a jet of water. Octopuses have a complex nervous system and excellent sight, and are among the most intelligent and behaviourally diverse of all invertebrates. Octopuses inhabit various regions of the ocean, including coral reefs, pelagic waters, and the seabed; some live in the intertidal zone and others at abyssal depths. Most species grow quickly, mature early, and are short-lived. In most species, the male uses a specially adapted arm to deliver a bundle of sperm directly into the female's mantle cavity, after which he becomes senescent and dies, while the female deposits fertilised eggs in a den and cares for them until they hatch, after which she also dies. Strategies to defend themselves against predators include the expulsion of ink, the use of camouflage and threat displays, the ability to jet quickly through the water and hide, and even deceit. All octopuses are venomous, but only the blue-ringed octopuses are known to be deadly to humans. Octopuses appear in mythology as sea monsters like the kraken of Norway and the Akkorokamui of the Ainu, and possibly the Gorgon of ancient Greece. A battle with an octopus appears in Victor Hugo's book Toilers of the Sea, inspiring other works such as Ian Fleming's Octopussy. Octopuses appear in Japanese erotic art, shunga. They are eaten and considered a delicacy by humans in many parts of the world, especially the Mediterranean and the Asian seas. Etymology and pluralisation The scientific Latin term was derived from the Ancient Greek compound oktṓpous, formed from oktṓ ('eight') and poús ('foot'), itself a variant of a word used, for example, by Alexander of Tralles for the common octopus. The standard pluralised form of octopus in English is octopuses; the Ancient Greek plural, octopodes, has also been used historically. The alternative plural octopi is usually considered incorrect because it wrongly assumes that octopus is a Latin second-declension noun or adjective when, in either Greek or Latin, it is a third-declension noun. Historically, the first plural to commonly appear in English language sources, in the early 19th century, is the Latinate form octopi, followed by the English form octopuses in the latter half of the same century. The Hellenic plural is roughly contemporary in usage, although it is also the rarest. Fowler's Modern English Usage states that the only acceptable plural in English is octopuses, that octopi is misconceived, and octopodes pedantic; the last is nonetheless used frequently enough to be acknowledged by the descriptivist Merriam-Webster 11th Collegiate Dictionary and Webster's New World College Dictionary. The Oxford English Dictionary lists octopuses, octopi, and octopodes, in that order, reflecting frequency of use, calling octopodes rare and noting that octopi is based on a misunderstanding. 
The New Oxford American Dictionary (3rd Edition, 2010) lists octopuses as the only acceptable pluralisation, and indicates that octopodes is still occasionally used, but that octopi is incorrect. Anatomy and physiology Size The giant Pacific octopus (Enteroctopus dofleini) is often cited as the largest known octopus species. Adults usually weigh around , with an arm span of up to . The largest specimen of this species to be scientifically documented was an animal with a live mass of . Much larger sizes have been claimed for the giant Pacific octopus: one specimen was recorded as with an arm span of . A carcass of the seven-arm octopus, Haliphron atlanticus, weighed and was estimated to have had a live mass of . The smallest species is Octopus wolfi, which is around and weighs less than . External characteristics The octopus is bilaterally symmetrical along its dorso-ventral (back to belly) axis; the head and foot are at one end of an elongated body and function as the anterior (front) of the animal. The head includes the mouth and brain. The foot has evolved into a set of flexible, prehensile appendages, known as "arms", that surround the mouth and are attached to each other near their base by a webbed structure. The arms can be described based on side and sequence position (such as L1, R1, L2, R2) and divided into four pairs. The two rear appendages are generally used to walk on the sea floor, while the other six are used to forage for food. The bulbous and hollow mantle is fused to the back of the head and is known as the visceral hump; it contains most of the vital organs. The mantle cavity has muscular walls and contains the gills; it is connected to the exterior by a funnel or siphon. The mouth of an octopus, located underneath the arms, has a sharp hard beak. The skin consists of a thin outer epidermis with mucous cells and sensory cells and a connective tissue dermis consisting largely of collagen fibres and various cells allowing colour change. Most of the body is made of soft tissue allowing it to lengthen, contract, and contort itself. The octopus can squeeze through tiny gaps; even the larger species can pass through an opening close to in diameter. Lacking skeletal support, the arms work as muscular hydrostats and contain longitudinal, transverse and circular muscles around a central axial nerve. They can extend and contract, twist to left or right, bend at any place in any direction or be held rigid. The interior surfaces of the arms are covered with circular, adhesive suckers. The suckers allow the octopus to anchor itself or to manipulate objects. Each sucker is usually circular and bowl-like and has two distinct parts: an outer shallow cavity called an infundibulum and a central hollow cavity called an acetabulum, both of which are thick muscles covered in a protective chitinous cuticle. When a sucker attaches to a surface, the orifice between the two structures is sealed. The infundibulum provides adhesion while the acetabulum remains free, and muscle contractions allow for attachment and detachment. Each of the eight arms senses and responds to light, allowing the octopus to control the limbs even if its head is obscured. The eyes of the octopus are large and at the top of the head. They are similar in structure to those of a fish, and are enclosed in a cartilaginous capsule fused to the cranium. The cornea is formed from a translucent epidermal layer; the slit-shaped pupil forms a hole in the iris just behind the cornea. 
The lens is suspended behind the pupil; photoreceptive retinal cells cover the back of the eye. The pupil can be adjusted in size; a retinal pigment screens incident light in bright conditions. Some species differ in form from the typical octopus body shape. Basal species, the Cirrina, have stout gelatinous bodies with webbing that reaches near the tip of their arms, and two large fins above the eyes, supported by an internal shell. Fleshy papillae or cirri are found along the bottom of the arms, and the eyes are more developed. Circulatory system Octopuses have a closed circulatory system, in which the blood remains inside blood vessels. Octopuses have three hearts; a systemic or main heart that circulates blood around the body and two branchial or gill hearts that pump it through each of the two gills. The systemic heart becomes inactive when the animal is swimming. Thus the octopus tires quickly and prefers to crawl. Octopus blood contains the copper-rich protein haemocyanin to transport oxygen. This makes the blood very viscous and it requires considerable pressure to pump it around the body; octopuses' blood pressures can exceed . In cold conditions with low oxygen levels, haemocyanin transports oxygen more efficiently than haemoglobin. The haemocyanin is dissolved in the plasma instead of being carried within blood cells and gives the blood a bluish colour. The systemic heart has muscular contractile walls and consists of a single ventricle and two atria, one for each side of the body. The blood vessels consist of arteries, capillaries and veins and are lined with a cellular endothelium which is quite unlike that of most other invertebrates. The blood circulates through the aorta and capillary system, to the venae cavae, after which the blood is pumped through the gills by the branchial hearts and back to the main heart. Much of the venous system is contractile, which helps circulate the blood. Respiration Respiration involves drawing water into the mantle cavity through an aperture, passing it through the gills, and expelling it through the siphon. The ingress of water is achieved by contraction of radial muscles in the mantle wall, and flapper valves shut when strong circular muscles force the water out through the siphon. Extensive connective tissue lattices support the respiratory muscles and allow them to expand the respiratory chamber. The lamella structure of the gills allows for a high oxygen uptake, up to 65% in water at . Water flow over the gills correlates with locomotion, and an octopus can propel its body when it expels water out of its siphon. The thin skin of the octopus absorbs additional oxygen. When resting, around 41% of an octopus's oxygen absorption is through the skin. This decreases to 33% when it swims, as more water flows over the gills; skin oxygen uptake also increases. When it is resting after a meal, absorption through the skin can drop to 3% of its total oxygen uptake. Digestion and excretion The digestive system of the octopus begins with the buccal mass which consists of the mouth with its chitinous beak, the pharynx, radula and salivary glands. The radula is a spiked, muscular tongue-like organ with multiple rows of tiny teeth. Food is broken down and is forced into the oesophagus by two lateral extensions of the esophageal side walls in addition to the radula. From there it is transferred to the gastrointestinal tract, which is mostly suspended from the roof of the mantle cavity by numerous membranes. 
The tract consists of a crop, where the food is stored; a stomach, where food is ground down; a caecum where the now sludgy food is sorted into fluids and particles and which plays an important role in absorption; the digestive gland, where liver cells break down and absorb the fluid and become "brown bodies"; and the intestine, where the accumulated waste is turned into faecal ropes by secretions and blown out of the funnel via the rectum. During osmoregulation, fluid is added to the pericardia of the branchial hearts. The octopus has two nephridia (equivalent to vertebrate kidneys) which are associated with the branchial hearts; these and their associated ducts connect the pericardial cavities with the mantle cavity. Before reaching the branchial heart, each branch of the vena cava expands to form renal appendages which are in direct contact with the thin-walled nephridium. The urine is first formed in the pericardial cavity, and is modified by excretion, chiefly of ammonia, and selective absorption from the renal appendages, as it is passed along the associated duct and through the nephridiopore into the mantle cavity. Nervous system and senses Octopuses (along with cuttlefish) have the highest brain-to-body mass ratios of all invertebrates; this is greater than that of many vertebrates. Octopuses have the same jumping genes that are active in the human brain, implying an evolutionary convergence at molecular level. The nervous system is complex, only part of which is localised in its brain, which is contained in a cartilaginous capsule. Two-thirds of an octopus's neurons are in the nerve cords of its arms. This allows their arms to perform complex reflex actions without input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organised in their brains via internal somatotopic maps of their bodies. The nervous system of cephalopods is the most complex of all invertebrates. The giant nerve fibers of the cephalopod mantle have been widely used for many years as experimental material in neurophysiology; their large diameter (due to lack of myelination) makes them relatively easy to study compared with other animals. Like other cephalopods, octopuses have camera-like eyes, and can distinguish the polarisation of light. Colour vision appears to vary from species to species, for example, being present in O. aegina but absent in O. vulgaris. Opsins in the skin respond to different wavelengths of light and help the animals choose a colouration that camouflages them; the chromatophores in the skin can respond to light independently of the eyes. An alternative hypothesis is that cephalopod eyes in species that only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into colour vision, though this sacrifices image quality. This would explain pupils shaped like the letter "U", the letter "W", or a dumbbell, as well as the need for colourful mating displays. Attached to the brain are two organs called statocysts (sac-like structures containing a mineralised mass and sensitive hairs), that allow the octopus to sense the orientation of its body. They provide information on the position of the body relative to gravity and can detect angular acceleration. An autonomic response keeps the octopus's eyes oriented so that the pupil is always horizontal. Octopuses may also use the statocyst to hear sound. The common octopus can hear sounds between 400 Hz and 1000 Hz, and hears best at 600 Hz. 
Octopuses have an excellent somatosensory system. Their suction cups are equipped with chemoreceptors so they can taste what they touch. Octopus arms move easily because the sensors recognise octopus skin and prevent self-attachment. Octopuses appear to have poor proprioceptive sense and must observe the arms visually to keep track of their position. Ink sac The ink sac of an octopus is located under the digestive gland. A gland attached to the sac produces the ink, and the sac stores it. The sac is close enough to the funnel for the octopus to shoot out the ink with a water jet. Before it leaves the funnel, the ink passes through glands which mix it with mucus, creating a thick, dark blob which allows the animal to escape from a predator. The main pigment in the ink is melanin, which gives it its black colour. Cirrate octopuses usually lack the ink sac. Life cycle Reproduction Octopuses are gonochoric and have a single, posteriorly-located gonad which is associated with the coelom. The testis in males and the ovary in females bulges into the gonocoel and the gametes are released here. The gonocoel is connected by the gonoduct to the mantle cavity, which it enters at the gonopore. An optic gland creates hormones that cause the octopus to mature and age and stimulate gamete production. The gland may be triggered by environmental conditions such as temperature, light and nutrition, which thus control the timing of reproduction and lifespan. When octopuses reproduce, the male uses a specialised arm called a hectocotylus to transfer spermatophores (packets of sperm) from the terminal organ of the reproductive tract (the cephalopod "penis") into the female's mantle cavity. The hectocotylus in benthic octopuses is usually the third right arm, which has a spoon-shaped depression and modified suckers near the tip. In most species, fertilisation occurs in the mantle cavity. The reproduction of octopuses has been studied in only a few species. One such species is the giant Pacific octopus, in which courtship is accompanied, especially in the male, by changes in skin texture and colour. The male may cling to the top or side of the female or position himself beside her. There is some speculation that he may first use his hectocotylus to remove any spermatophore or sperm already present in the female. He picks up a spermatophore from his spermatophoric sac with the hectocotylus, inserts it into the female's mantle cavity, and deposits it in the correct location for the species, which in the giant Pacific octopus is the opening of the oviduct. Two spermatophores are transferred in this way; these are about one metre (yard) long, and the empty ends may protrude from the female's mantle. A complex hydraulic mechanism releases the sperm from the spermatophore, and it is stored internally by the female. About forty days after mating, the female giant Pacific octopus attaches strings of small fertilised eggs (10,000 to 70,000 in total) to rocks in a crevice or under an overhang. Here she guards and cares for them for about five months (160 days) until they hatch. In colder waters, such as those off Alaska, it may take up to ten months for the eggs to completely develop. The female aerates them and keeps them clean; if left untended, many will die. She does not feed during this time and dies soon after. Males become senescent and die a few weeks after mating. The eggs have large yolks; cleavage (division) is superficial and a germinal disc develops at the pole. 
During gastrulation, the margins of this grow down and surround the yolk, forming a yolk sac, which eventually forms part of the gut. The dorsal side of the disc grows upward and forms the embryo, with a shell gland on its dorsal surface, gills, mantle and eyes. The arms and funnel develop as part of the foot on the ventral side of the disc. The arms later migrate upward, coming to form a ring around the funnel and mouth. The yolk is gradually absorbed as the embryo develops. Most young octopuses hatch as paralarvae and are planktonic for weeks to months, depending on the species and water temperature. They feed on copepods, arthropod larvae and other zooplankton, eventually settling on the ocean floor and developing directly into adults with no distinct metamorphoses that are present in other groups of mollusc larvae. Octopus species that produce larger eggs – including the southern blue-ringed, Caribbean reef, California two-spot, Eledone moschata and deep sea octopuses – instead hatch as benthic animals similar to the adults. In the argonaut (paper nautilus), the female secretes a fine, fluted, papery shell in which the eggs are deposited and in which she also resides while floating in mid-ocean. In this she broods the young, and it also serves as a buoyancy aid allowing her to adjust her depth. The male argonaut is minute by comparison and has no shell. Lifespan Octopuses have short lifespans, and some species complete their lifecycles in only six months. The giant Pacific octopus, one of the two largest species of octopus, usually lives for three to five years. Octopus lifespan is limited by reproduction. For most octopuses, the last stage of their life is called senescence. It is the breakdown of cellular function without repair or replacement. For males, this typically begins after mating. Senescence may last from weeks to a few months, at most. For females, it begins when they lay a clutch of eggs. Females will spend all their time aerating and protecting their eggs until they are ready to hatch. During senescence, an octopus does not feed and quickly weakens. Lesions begin to form and the octopus literally degenerates. Unable to defend themselves, octopuses often fall prey to predators. This makes most octopuses effectively semelparous. The larger Pacific striped octopus (LPSO) is an exception, as it can reproduce repeatedly over a life of around two years. Octopus reproductive organs mature due to the hormonal influence of the optic gland but result in the inactivation of their digestive glands. Unable to feed, the octopus typically dies of starvation. Experimental removal of both optic glands after spawning was found to result in the cessation of broodiness, the resumption of feeding, increased growth, and greatly extended lifespans. It has been proposed that the naturally short lifespan may be functional to prevent rapid overpopulation. Distribution and habitat Octopuses live in every ocean, and different species have adapted to different marine habitats. As juveniles, common octopuses inhabit shallow tide pools. The Hawaiian day octopus (Octopus cyanea) lives on coral reefs; argonauts drift in pelagic waters. Abdopus aculeatus mostly lives in near-shore seagrass beds. Some species are adapted to the cold, ocean depths. The spoon-armed octopus (Bathypolypus arcticus) is found at depths of , and Vulcanoctopus hydrothermalis lives near hydrothermal vents at . The cirrate species are often free-swimming and live in deep-water habitats. 
Although several species are known to live at bathyal and abyssal depths, there is only a single indisputable record of an octopus in the hadal zone: a species of Grimpoteuthis (dumbo octopus) photographed at . No species are known to live in fresh water. Behaviour and ecology Most species are solitary when not mating, though a few are known to occur in high densities and with frequent interactions, such as signaling, mate defending and evicting individuals from dens. This is likely the result of abundant food supplies combined with limited den sites. The LPSO has been described as particularly social, living in groups of up to 40 individuals. Octopuses hide in dens, which are typically crevices in rocky outcrops or other hard structures, though some species burrow into sand or mud. Octopuses are not territorial but generally remain in a home range; they may leave in search of food. They can navigate back to a den without having to retrace their outward route. They are not migratory. Octopuses bring captured prey to the den, where they can eat it safely. Sometimes the octopus catches more prey than it can eat, and the den is often surrounded by a midden of dead and uneaten food items. Other creatures, such as fish, crabs, molluscs and echinoderms, often share the den with the octopus, either because they have arrived as scavengers, or because they have survived capture. On rare occasions, octopuses hunt cooperatively with other species, with fish as their partners. They regulate the species composition of the hunting group and the behavior of their partners by punching them. Feeding Nearly all octopuses are predatory; bottom-dwelling octopuses eat mainly crustaceans, polychaete worms, and other molluscs such as whelks and clams; open-ocean octopuses eat mainly prawns, fish and other cephalopods. Major items in the diet of the giant Pacific octopus include bivalve molluscs such as the cockle Clinocardium nuttallii, clams and scallops and crustaceans such as crabs and spider crabs. Prey that it is likely to reject include moon snails, because they are too large, and limpets, rock scallops, chitons and abalone, because they are too securely fixed to the rock. Small cirrate octopuses such as those of the genera Grimpoteuthis and Opisthoteuthis typically prey on polychaetes, copepods, amphipods and isopods. A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it toward the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva, then dismember them with their beaks. Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin. It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for O. vulgaris to create a hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart. Some species have other modes of feeding. 
Grimpoteuthis has a reduced or non-existent radula and swallows prey whole. In the deep-sea genus Stauroteuthis, some of the muscle cells that control the suckers in most species have been replaced with photophores which are believed to fool prey by directing them to the mouth, making them one of the few bioluminescent octopuses. Locomotion Octopuses mainly move about by relatively slow crawling with some swimming in a head-first position. Jet propulsion or backward swimming, is their fastest means of locomotion, followed by swimming and crawling. When in no hurry, they usually crawl on either solid or soft surfaces. Several arms are extended forward, some of the suckers adhere to the substrate and the animal hauls itself forward with its powerful arm muscles, while other arms may push rather than pull. As progress is made, other arms move ahead to repeat these actions and the original suckers detach. During crawling, the heart rate nearly doubles, and the animal requires 10 or 15 minutes to recover from relatively minor exercise. Most octopuses swim by expelling a jet of water from the mantle through the siphon into the sea. The physical principle behind this is that the force required to accelerate the water through the orifice produces a reaction that propels the octopus in the opposite direction. The direction of travel depends on the orientation of the siphon. When swimming, the head is at the front and the siphon is pointed backward but, when jetting, the visceral hump leads, the siphon points at the head and the arms trail behind, with the animal presenting a fusiform appearance. In an alternative method of swimming, some species flatten themselves dorso-ventrally, and swim with the arms held out sideways; this may provide lift and be faster than normal swimming. Jetting is used to escape from danger, but is physiologically inefficient, requiring a mantle pressure so high as to stop the heart from beating, resulting in a progressive oxygen deficit. Cirrate octopuses cannot produce jet propulsion and rely on their fins for swimming. They have neutral buoyancy and drift through the water with the fins extended. They can also contract their arms and surrounding web to make sudden moves known as "take-offs". Another form of locomotion is "pumping", which involves symmetrical contractions of muscles in their webs producing peristaltic waves. This moves the body slowly. In 2005, Adopus aculeatus and veined octopus (Amphioctopus marginatus) were found to walk on two arms, while at the same time mimicking plant matter. This form of locomotion allows these octopuses to move quickly away from a potential predator without being recognised. Some species of octopus can crawl out of the water briefly, which they may do between tide pools. "Stilt walking" is used by the veined octopus when carrying stacked coconut shells. The octopus carries the shells underneath it with two arms, and progresses with an ungainly gait supported by its remaining arms held rigid. Intelligence Octopuses are highly intelligent. Maze and problem-solving experiments have shown evidence of a memory system that can store both short- and long-term memory. Young octopuses learn nothing from their parents, as adults provide no parental care beyond tending to their eggs until the young octopuses hatch. In laboratory experiments, octopuses can readily be trained to distinguish between different shapes and patterns. They have been reported to practise observational learning, although the validity of these findings is contested. 
Octopuses have also been observed in what has been described as play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them. Octopuses often break out of their aquariums and sometimes into others in search of food. Growing evidence suggests that octopuses are sentient and capable of experiencing pain. The veined octopus collects discarded coconut shells, then uses them to build a shelter, an example of tool use. Camouflage and colour change Octopuses use camouflage when hunting and to avoid predators. To do this, they use specialised skin cells that change the appearance of the skin by adjusting its colour, opacity, or reflectivity. Chromatophores contain yellow, orange, red, brown, or black pigments; most species have three of these colours, while some have two or four. Other colour-changing cells are reflective iridophores and white leucophores. This colour-changing ability is also used to communicate with or warn other octopuses. The energy cost of the complete activation of the chromatophore system is very high equally being nearly as much as all the energy used by an octopus at rest. Octopuses can create distracting patterns with waves of dark colouration across the body, a display known as the "passing cloud". Muscles in the skin change the texture of the mantle to achieve greater camouflage. In some species, the mantle can take on the spiky appearance of algae; in others, skin anatomy is limited to relatively uniform shades of one colour with limited skin texture. Octopuses that are diurnal and live in shallow water have evolved more complex skin than their nocturnal and deep-sea counterparts. A "moving rock" trick involves the octopus mimicking a rock and then inching across the open space with a speed matching that of the surrounding water. Defence Aside from humans, octopuses may be preyed on by fishes, seabirds, sea otters, pinnipeds, cetaceans, and other cephalopods. Octopuses typically hide or disguise themselves by camouflage and mimicry; some have conspicuous warning coloration (aposematism) or deimatic behaviour (“bluffing” a seemingly threatening appearance). An octopus may spend 40% of its time hidden away in its den. When the octopus is approached, it may extend an arm to investigate. 66% of Enteroctopus dofleini in one study had scars, with 50% having amputated arms. The blue rings of the highly venomous blue-ringed octopus are hidden in muscular skin folds which contract when the animal is threatened, exposing the iridescent warning. The Atlantic white-spotted octopus (Callistoctopus macropus) turns bright brownish red with oval white spots all over in a high contrast display. Displays are often reinforced by stretching out the animal's arms, fins or web to make it look as big and threatening as possible. Once they have been seen by a predator, they commonly try to escape but can also create a distraction by ejecting an ink cloud from their ink sac. The ink is thought to reduce the efficiency of olfactory organs, which would aid evasion from predators that employ smell for hunting, such as sharks. Ink clouds of some species might act as pseudomorphs, or decoys that the predator attacks instead. When under attack, some octopuses can perform arm autotomy, in a manner similar to the way skinks and other lizards detach their tails. The crawling arm may distract would-be predators. Such severed arms remain sensitive to stimuli and move away from unpleasant sensations. Octopuses can replace lost limbs. 
Some octopuses, such as the mimic octopus, can combine their highly flexible bodies with their colour-changing ability to mimic other, more dangerous animals, such as lionfish, sea snakes, and eels. Pathogens and parasites The diseases and parasites that affect octopuses have been little studied, but cephalopods are known to be the intermediate or final hosts of various parasitic cestodes, nematodes and copepods; 150 species of protistan and metazoan parasites have been recognised. The Dicyemidae are a family of tiny worms that are found in the renal appendages of many species; it is unclear whether they are parasitic or endosymbionts. Coccidians in the genus Aggregata living in the gut cause severe disease to the host. Octopuses have an innate immune system; their haemocytes respond to infection by phagocytosis, encapsulation, infiltration, or cytotoxic activities to destroy or isolate the pathogens. The haemocytes play an important role in the recognition and elimination of foreign bodies and wound repair. Captive animals are more susceptible to pathogens than wild ones. A gram-negative bacterium, Vibrio lentus, can cause skin lesions, exposure of muscle and sometimes death. Evolution The scientific name Octopoda was first coined and given as the order of octopuses in 1818 by English biologist William Elford Leach, who classified them as Octopoida the previous year. The Octopoda consists of around 300 known species and were historically divided into two suborders, the Incirrina and the Cirrina. More recent evidence suggests Cirrina is merely the most basal species, not a unique clade. The incirrate octopuses (the majority of species) lack the cirri and paired swimming fins of the cirrates. In addition, the internal shell of incirrates is either present as a pair of stylets or absent altogether. Fossil history and phylogeny The Cephalopoda evolved from a mollusc resembling the Monoplacophora in the Cambrian some 530 million years ago. The Coleoidea diverged from the nautiloids in the Devonian some 416 million years ago. In turn, the coleoids (including the squids and octopods) brought their shells inside the body and some 276 million years ago, during the Permian, split into the Vampyropoda and the Decabrachia. The octopuses arose from the Muensterelloidea within the Vampyropoda in the Jurassic. The earliest octopus likely lived near the sea floor (benthic to demersal) in shallow marine environments. Octopuses consist mostly of soft tissue, and so fossils are relatively rare. As soft-bodied cephalopods, they lack the external shell of most molluscs, including other cephalopods like the nautiloids and the extinct Ammonoidea. They have eight limbs like other Coleoidea, but lack the extra specialised feeding appendages known as tentacles which are longer and thinner with suckers only at their club-like ends. The vampire squid (Vampyroteuthis) also lacks tentacles but has sensory filaments. The cladograms are based on Sanchez et al., 2018, who created a molecular phylogeny based on mitochondrial and nuclear DNA marker sequences. The position of the Eledonidae is from Ibáñez et al., 2020, with a similar methodology. Dates of divergence are from Kröger et al., 2011 and Fuchs et al., 2019. The molecular analysis of the octopods shows that the suborder Cirrina (Cirromorphida) and the superfamily Argonautoidea are paraphyletic and are broken up; these names are shown in quotation marks and italics on the cladogram. 
RNA editing and the genome Octopuses, like other coleoid cephalopods but unlike more basal cephalopods or other molluscs, are capable of greater RNA editing, changing the nucleic acid sequence of the primary transcript of RNA molecules, than any other organisms. Editing is concentrated in the nervous system, and affects proteins involved in neural excitability and neuronal morphology. More than 60% of RNA transcripts for coleoid brains are recoded by editing, compared to less than 1% for a human or fruit fly. Coleoids rely mostly on ADAR enzymes for RNA editing, which requires large double-stranded RNA structures to flank the editing sites. Both the structures and editing sites are conserved in the coleoid genome and the mutation rates for the sites are severely hampered. Hence, greater transcriptome plasticity has come at the cost of slower genome evolution. The octopus genome is unremarkably bilaterian except for large developments of two gene families: protocadherins, which regulate the development of neurons; and the C2H2 zinc-finger transcription factors. Many genes specific to cephalopods are expressed in the animals' skin, suckers, and nervous system. Relationship to humans In art, literature, and mythology Ancient seafaring people were aware of the octopus, as evidenced by artworks and designs. For example, a stone carving found in the archaeological recovery from Bronze Age Minoan Crete at Knossos (1900–1100 BC) depicts a fisherman carrying an octopus. The terrifyingly powerful Gorgon of Greek mythology may have been inspired by the octopus or squid, the octopus itself representing the severed head of Medusa, the beak as the protruding tongue and fangs, and its tentacles as the snakes. The kraken is a legendary sea monster of giant proportions said to dwell off the coasts of Norway and Greenland, usually portrayed in art as a giant octopus attacking ships. Linnaeus included it in the first edition of his 1735 Systema Naturae. One translation of the Hawaiian creation myth the Kumulipo suggests that the octopus is the lone survivor of a previous age. The Akkorokamui is a gigantic octopus-like monster from Ainu folklore, worshipped in Shinto. A battle with an octopus plays a significant role in Victor Hugo's 1866 book Travailleurs de la mer (Toilers of the Sea). Ian Fleming's 1966 short story collection Octopussy and The Living Daylights, and the 1983 James Bond film were partly inspired by Hugo's book. Japanese erotic art, shunga, includes ukiyo-e woodblock prints such as Katsushika Hokusai's 1814 print Tako to ama (The Dream of the Fisherman's Wife), in which an ama diver is sexually intertwined with a large and a small octopus. The print is a forerunner of tentacle erotica. The biologist P. Z. Myers noted in his science blog, Pharyngula, that octopuses appear in "extraordinary" graphic illustrations involving women, tentacles, and bare breasts. Since it has numerous arms emanating from a common centre, the octopus is often used as a symbol for a powerful and manipulative organisation, company, or country. The Beatles song "Octopus's Garden", on the band's 1969 album Abbey Road, was written by Ringo Starr after he was told about how octopuses travel along the sea bed picking up stones and shiny objects with which to build gardens. Danger to humans Octopuses generally avoid humans, but incidents have been verified. For example, a Pacific octopus, said to be nearly perfectly camouflaged, "lunged" at a diver and "wrangled" over his camera before it let go. 
Another diver recorded the encounter on video. All species are venomous, but only blue-ringed octopuses have venom that is lethal to humans. Blue-ringed octopuses are among the deadliest animals in the sea; their bites are reported each year across the animals' range from Australia to the eastern Indo-Pacific Ocean. They bite only when provoked or accidentally stepped upon; bites are small and usually painless. The venom appears to be able to penetrate the skin without a puncture, given prolonged contact. It contains tetrodotoxin, which causes paralysis by blocking the transmission of nerve impulses to the muscles. This causes death by respiratory failure leading to cerebral anoxia. No antidote is known, but if breathing can be kept going artificially, patients recover within 24 hours. Bites have been recorded from captive octopuses of other species; they leave swellings which disappear in a day or two. As a food source Octopus fisheries exist around the world with total catches varying between 245,320 and 322,999 metric tons from 1986 to 1995. The world catch peaked in 2007 at 380,000 tons, and had fallen by a tenth by 2012. Methods to capture octopuses include pots, traps, trawls, snares, drift fishing, spearing, hooking and hand collection. Octopuses have a food conversion efficiency greater than that of chickens, making octopus aquaculture a possibility. Octopuses compete with human fisheries targeting other species, and even rob traps and nets for their catch; they may, themselves, be caught as bycatch if they cannot get away. Octopus is eaten in many cultures, such as those on the Mediterranean and Asian coasts. The arms and other body parts are prepared in ways that vary by species and geography. Live octopuses or their wriggling pieces are consumed as ikizukuri in Japanese cuisine and san-nakji in Korean cuisine. If not prepared properly, however, the severed arms can still choke the diner with their suction cups, causing at least one death in 2010. Animal welfare groups have objected to the live consumption of octopuses on the basis that they can experience pain. In science and technology In classical Greece, Aristotle (384–322 BC) commented on the colour-changing abilities of the octopus, both for camouflage and for signalling, in his Historia animalium: "The octopus ... seeks its prey by so changing its colour as to render it like the colour of the stones adjacent to it; it does so also when alarmed." Aristotle noted that the octopus had a hectocotyl arm and suggested it might be used in sexual reproduction. This claim was widely disbelieved until the 19th century. It was described in 1829 by the French zoologist Georges Cuvier, who supposed it to be a parasitic worm, naming it as a new species, Hectocotylus octopodis. Other zoologists thought it a spermatophore; the German zoologist Heinrich Müller believed it was "designed" to detach during copulation. In 1856, the Danish zoologist Japetus Steenstrup demonstrated that it is used to transfer sperm, and only rarely detaches. Octopuses offer many possibilities in biological research, including their ability to regenerate limbs, change the colour of their skin, behave intelligently with a distributed nervous system, and make use of 168 kinds of protocadherins (humans have 58), the proteins that guide the connections neurons make with each other. The California two-spot octopus has had its genome sequenced, allowing exploration of its molecular adaptations. 
Having independently evolved mammal-like intelligence, octopuses have been compared by the philosopher Peter Godfrey-Smith, who has studied the nature of intelligence, to hypothetical intelligent extraterrestrials. Their problem-solving skills, along with their mobility and lack of rigid structure enable them to escape from supposedly secure tanks in laboratories and public aquariums. Due to their intelligence, octopuses are listed in some countries as experimental animals on which surgery may not be performed without anesthesia, a protection usually extended only to vertebrates. In the UK from 1993 to 2012, the common octopus (Octopus vulgaris) was the only invertebrate protected under the Animals (Scientific Procedures) Act 1986. In 2012, this legislation was extended to include all cephalopods in accordance with a general EU directive. Some robotics research is exploring biomimicry of octopus features. Octopus arms can move and sense largely autonomously without intervention from the animal's central nervous system. In 2015 a team in Italy built soft-bodied robots able to crawl and swim, requiring only minimal computation. In 2017, a German company made an arm with a soft pneumatically controlled silicone gripper fitted with two rows of suckers. It is able to grasp objects such as a metal tube, a magazine, or a ball, and to fill a glass by pouring water from a bottle.
Biology and health sciences
Mollusks
null
22799
https://en.wikipedia.org/wiki/Oxycodone
Oxycodone
Oxycodone, sold under the brand name Roxicodone and OxyContin (which is the extended-release form) among others, is a semi-synthetic opioid used medically for the treatment of moderate to severe pain. It is highly addictive and is a commonly abused drug. It is usually taken by mouth, and is available in immediate-release and controlled-release formulations. Onset of pain relief typically begins within fifteen minutes and lasts for up to six hours with the immediate-release formulation. In the United Kingdom, it is available by injection. Combination products are also available with paracetamol (acetaminophen), ibuprofen, naloxone, naltrexone, and aspirin. Common side effects include euphoria, constipation, nausea, vomiting, loss of appetite, drowsiness, dizziness, itching, dry mouth, and sweating. Side effects may also include addiction and dependence, substance abuse, irritability, depression or mania, delirium, hallucinations, hypoventilation, gastroparesis, bradycardia, and hypotension. Those allergic to codeine may also be allergic to oxycodone. Use of oxycodone in early pregnancy appears relatively safe. Opioid withdrawal may occur if rapidly stopped. Oxycodone acts by activating the μ-opioid receptor. When taken by mouth, it has roughly 1.5 times the effect of the equivalent amount of morphine. Oxycodone was originally produced from the opium poppy opiate alkaloid thebaine in 1916 in Germany. One year later, it was used medically for the first time in Germany in 1917. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 60th most commonly prescribed medication in the United States, with more than 11million prescriptions. A number of abuse-deterrent formulations are available, such as in combination with naloxone or naltrexone. Medical uses Oxycodone is used for managing moderate to severe acute or chronic pain when other treatments are not sufficient. It may improve quality of life in certain types of pain. Numerous studies have been completed, and the appropriate use of this compound does improve the quality of life of patients with long term chronic pain syndromes. Oxycodone is available as a controlled-release tablet. A 2006 review found that controlled-release oxycodone is comparable to immediate-release oxycodone, morphine, and hydromorphone in management of moderate to severe cancer pain, with fewer side effects than morphine. The author concluded that the controlled-release form is a valid alternative to morphine and a first-line treatment for cancer pain. In 2014, the European Association for Palliative Care recommended oxycodone by mouth as a second-line alternative to morphine by mouth for cancer pain. In children between 11 and 16, the extended-release formulation is FDA-approved for the relief of cancer pain, trauma pain, or pain due to major surgery (for those already treated with opioids, who can tolerate at least 20 mg per day of oxycodone) – this provides an alternative to Duragesic (fentanyl), the only other extended-release opioid analgesic approved for children. Oxycodone, in its extended-release form and/or in combination with naloxone, is sometimes used off-label in the treatment of severe and refractory restless legs syndrome. 
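The oral potency relationship stated above (oxycodone being roughly 1.5 times as potent as morphine) can be expressed as a one-line conversion. The sketch below is illustrative arithmetic only, not dosing guidance; the function name and the choice of Python are assumptions of this sketch, and the 1.5 ratio is simply the figure quoted in this article:

```python
def oral_morphine_equivalent_mg(oxycodone_mg: float, potency_ratio: float = 1.5) -> float:
    """Approximate oral morphine equivalent of an oral oxycodone dose,
    using the ~1.5x potency ratio quoted in the text above.
    Illustrative only -- not clinical dosing guidance."""
    return oxycodone_mg * potency_ratio

# Example: 20 mg oral oxycodone ~ 30 mg oral morphine, matching the
# equivalence given later in the pharmacodynamics discussion.
print(oral_morphine_equivalent_mg(20))  # 30.0
```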
Available forms Oxycodone is available in a variety of formulations for use by mouth or under the tongue:
Immediate-release oxycodone (OxyFast, OxyIR, OxyNorm, Roxicodone)
Controlled-release oxycodone (OxyContin, Xtampza ER) – 10–12 hour duration
Oxycodone tamper-resistant (OxyContin OTR)
Immediate-release oxycodone with paracetamol (acetaminophen) (Percocet, Endocet, Roxicet, Tylox)
Immediate-release oxycodone with aspirin (Endodan, Oxycodan, Percodan, Roxiprin)
Immediate-release oxycodone with ibuprofen (Combunox)
Controlled-release oxycodone with naloxone (Targin, Targiniq, Targinact) – 10–12 hour duration
Controlled-release oxycodone with naltrexone (Troxyca) – 10–12 hour duration
In the US, oxycodone is only approved for use by mouth, available as tablets and oral solutions. Parenteral formulations of oxycodone (brand name OxyNorm) are also available in other parts of the world, however, and are widely used in the European Union. In Spain, the Netherlands and the United Kingdom, oxycodone is approved for intravenous (IV) and intramuscular (IM) use. When first introduced in Germany during World War I, both IV and IM administrations of oxycodone were commonly used for postoperative pain management of Central Powers soldiers. Side effects The most common side effects of oxycodone include reduced sensitivity to pain, delayed gastric emptying, euphoria, anxiolysis (a reduction in anxiety), feelings of relaxation, and respiratory depression. Common side effects of oxycodone include constipation (23%), nausea (23%), vomiting (12%), somnolence (23%), dizziness (13%), itching (13%), dry mouth (6%), and sweating (5%). Less common side effects (experienced by less than 5% of patients) include loss of appetite, nervousness, abdominal pain, diarrhea, urinary retention, dyspnea, and hiccups. Most side effects generally become less intense over time, although issues related to constipation are likely to continue for the duration of use. Chronic use of this compound and the associated constipation can become very serious and have been implicated in life-threatening bowel perforations; a number of specific medications, including naloxegol, have been developed to address opioid-induced constipation. Oxycodone in combination with naloxone in managed-release tablets has been formulated to both deter abuse and reduce opioid-induced constipation. Dependence and withdrawal The risk of experiencing severe withdrawal symptoms is high if a patient has become physically dependent and discontinues oxycodone abruptly. Medically, when the drug has been taken regularly over an extended period, it is withdrawn gradually rather than abruptly. People who regularly use oxycodone recreationally or at higher than prescribed doses are at even higher risk of severe withdrawal symptoms. The symptoms of oxycodone withdrawal, as with other opioids, may include "anxiety, panic attack, nausea, insomnia, muscle pain, muscle weakness, fevers, and other flu-like symptoms". Withdrawal symptoms have also been reported in newborns whose mothers had been either injecting or orally taking oxycodone during pregnancy. Hormone levels As with other opioids, chronic use of oxycodone (particularly with higher doses) can often cause concurrent hypogonadism (low sex hormone levels). 
Overdose In high doses, overdoses, or in some persons not tolerant to opioids, oxycodone can cause shallow breathing, slowed heart rate, cold/clammy skin, pauses in breathing, low blood pressure, constricted pupils, circulatory collapse, respiratory arrest, and death. In 2011, it was the leading cause of drug-related deaths in the U.S. However, from 2012 onwards, heroin and fentanyl have become more common causes of drug-related deaths. Oxycodone overdose has also been described to cause spinal cord infarction in high doses and ischemic damage to the brain, due to prolonged hypoxia from suppressed breathing. Interactions Oxycodone is metabolized by the enzymes CYP3A4 and CYP2D6. Therefore, its clearance can be altered by inhibitors and inducers of these enzymes, increasing and decreasing half-life, respectively. (For lists of CYP3A4 and CYP2D6 inhibitors and inducers, see here and here, respectively.) Natural genetic variation in these enzymes can also influence the clearance of oxycodone, which may be related to the wide inter-individual variability in its half-life and potency. Ritonavir or lopinavir/ritonavir greatly increase plasma concentrations of oxycodone in healthy human volunteers due to inhibition of CYP3A4 and CYP2D6. Rifampicin greatly reduces plasma concentrations of oxycodone due to strong induction of CYP3A4. There is also a case report of fosphenytoin, a CYP3A4 inducer, dramatically reducing the analgesic effects of oxycodone in a chronic pain patient. Dosage or medication adjustments may be necessary in each case. Pharmacology Pharmacodynamics Oxycodone, a semi-synthetic opioid, is a highly selective full agonist of the μ-opioid receptor (MOR). This is the main biological target of the endogenous opioid neuropeptide β-endorphin. Oxycodone has low affinity for the δ-opioid receptor (DOR) and the κ-opioid receptor (KOR), where it is an agonist similarly. After oxycodone binds to the MOR, a G protein-complex is released, which inhibits the release of neurotransmitters by the cell by decreasing the amount of cAMP produced, closing calcium channels, and opening potassium channels. Opioids like oxycodone are thought to produce their analgesic effects via activation of the MOR in the midbrain periaqueductal gray (PAG) and rostral ventromedial medulla (RVM). Conversely, they are thought to produce reward and addiction via activation of the MOR in the mesolimbic reward pathway, including in the ventral tegmental area, nucleus accumbens, and ventral pallidum. Tolerance to the analgesic and rewarding effects of opioids is complex and occurs due to receptor-level tolerance (e.g., MOR downregulation), cellular-level tolerance (e.g., cAMP upregulation), and system-level tolerance (e.g., neural adaptation due to induction of ΔFosB expression). Taken orally, 20 mg of immediate-release oxycodone is considered to be equivalent in analgesic effect to 30 mg of morphine, while extended release oxycodone is considered to be twice as potent as oral morphine. Similarly to most other opioids, oxycodone increases prolactin secretion, but its influence on testosterone levels is unknown. Unlike morphine, oxycodone lacks immunosuppressive activity (measured by natural killer cell activity and interleukin 2 production in vitro); the clinical relevance of this has not been clarified. Active metabolites A few of the metabolites of oxycodone have also been found to be active as MOR agonists, some of which notably have much higher affinity for (as well as higher efficacy at) the MOR in comparison. 
Oxymorphone possesses 3- to 5-fold higher affinity for the MOR than does oxycodone, while noroxycodone and noroxymorphone possess one-third of and 3-fold higher affinity for the MOR, respectively, and MOR activation is 5- to 10-fold less with noroxycodone but 2-fold higher with noroxymorphone relative to oxycodone. Noroxycodone, noroxymorphone, and oxymorphone also have longer biological half-lives than oxycodone. However, despite the greater in vitro activity of some of its metabolites, it has been determined that oxycodone itself is responsible for 83.0% and 94.8% of its analgesic effect following oral and intravenous administration, respectively. Oxymorphone plays only a minor role, being responsible for 15.8% and 4.5% of the analgesic effect of oxycodone after oral and intravenous administration, respectively. Although the CYP2D6 genotype and the route of administration result in differential rates of oxymorphone formation, the unchanged parent compound remains the major contributor to the overall analgesic effect of oxycodone. In contrast to oxycodone and oxymorphone, noroxycodone and noroxymorphone, while also potent MOR agonists, poorly cross the blood–brain barrier into the central nervous system, and for this reason are only minimally analgesic in comparison. κ-opioid receptor In 1997, a group of Australian researchers proposed (based on a study in rats) that oxycodone acts on KORs, unlike morphine, which acts upon MORs. Further research by this group indicated the drug appears to be a high-affinity κ2b-opioid receptor agonist. However, this conclusion has been disputed, primarily on the basis that oxycodone produces effects that are typical of MOR agonists. In 2006, research by a Japanese group suggested the effect of oxycodone is mediated by different receptors in different situations. Specifically in diabetic mice, the KOR appears to be involved in the antinociceptive effects of oxycodone, while in nondiabetic mice, the μ1-opioid receptor seems to be primarily responsible for these effects. Pharmacokinetics Instant-release absorption profiles and Tmax Oxycodone can be administered by mouth, by intravenous, intramuscular, or subcutaneous injection, or by rectal, sublingual, buccal, or intranasal routes. The bioavailability of oral oxycodone averages 60 to 87%, with rectal administration yielding similar results. Intranasal oxycodone has a bioavailability of about 77%, the same half-life as oral oxycodone, and a faster Tmax; a lower figure of 47% previously reported for nasal spray administration is attributed to the study solution exceeding the 0.3- to 0.4-mL capacity of the nasal mucosa. Buccal bioavailability is about 55% with a Tmax of about 60 minutes; sublingual bioavailability ranges from about 20% (non-alkalized) to about 55% (alkalized), also with a Tmax of about 60 minutes. After a dose of conventional (immediate-release) oral oxycodone, the onset of action is 10 to 30 minutes, and peak plasma levels of the drug are attained within roughly 30 to 60 minutes; in contrast, after a dose of OxyContin (an oral controlled-release formulation), peak plasma levels of oxycodone occur in about three hours. The duration of instant-release oxycodone is 3 to 6 hours, although this can be variable depending on the individual. Distribution Oxycodone has a volume of distribution of 2.6 L/kg; from the blood it is distributed to skeletal muscle, liver, intestinal tract, lungs, spleen, and brain. At equilibrium the unbound concentration in the brain is threefold higher than the unbound concentration in blood. 
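The route-dependent bioavailability figures above translate directly into how much of a given dose reaches the systemic circulation. The following is a minimal illustrative sketch; the percentages are approximations taken from the text, and the 10 mg dose is an arbitrary example, not a dosing recommendation.

```python
# Rough sketch: estimate the systemically available amount of a dose from the
# route-specific bioavailability figures quoted above. Values are
# approximations from the text, not authoritative pharmacokinetic data.

BIOAVAILABILITY = {              # fraction of dose reaching systemic circulation
    "oral": 0.60,                # lower end of the 60-87% range
    "rectal": 0.60,
    "intranasal": 0.77,
    "buccal": 0.55,
    "sublingual (alkalized)": 0.55,
    "intravenous": 1.00,         # by definition
}

def available_dose(dose_mg: float, route: str) -> float:
    """Dose reaching the systemic circulation, in mg, for a given route."""
    return dose_mg * BIOAVAILABILITY[route]

for route in BIOAVAILABILITY:
    print(f"10 mg via {route}: ~{available_dose(10, route):.1f} mg systemically available")
```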
Conventional oral preparations start to reduce pain within 10 to 15 minutes on an empty stomach; in contrast, OxyContin starts to reduce pain within one hour. Metabolism The metabolism of oxycodone in humans occurs in the liver mainly via the cytochrome P450 system and is extensive (about 95%) and complex, with many minor pathways and resulting metabolites. Around 10% (range 8–14%) of a dose of oxycodone is excreted essentially unchanged (unconjugated or conjugated) in the urine. The major metabolites of oxycodone are noroxycodone (70%), noroxymorphone ("relatively high concentrations"), and oxymorphone (5%). The immediate metabolism of oxycodone in humans is as follows: N-Demethylation to noroxycodone predominantly via CYP3A4 O-Demethylation to oxymorphone predominantly via CYP2D6 6-Ketoreduction to 6α- and 6β-oxycodol N-Oxidation to oxycodone-N-oxide In humans, N-demethylation of oxycodone to noroxycodone by CYP3A4 is the major metabolic pathway, accounting for 45% ± 21% of a dose of oxycodone, while O-demethylation of oxycodone into oxymorphone by CYP2D6 and 6-ketoreduction of oxycodone into 6-oxycodols represent relatively minor metabolic pathways, accounting for 11% ± 6% and 8% ± 6% of a dose of oxycodone, respectively. Several of the immediate metabolites of oxycodone are subsequently conjugated with glucuronic acid and excreted in the urine. 6α-Oxycodol and 6β-oxycodol are further metabolized by N-demethylation to nor-6α-oxycodol and nor-6β-oxycodol, respectively, and by N-oxidation to 6α-oxycodol-N-oxide and 6β-oxycodol-N-oxide (which can subsequently be glucuronidated as well). Oxymorphone is also further metabolized, as follows: 3-Glucuronidation to oxymorphone-3-glucuronide predominantly via UGT2B7 6-Ketoreduction to 6α-oxymorphol and 6β-oxymorphol N-Demethylation to noroxymorphone The first pathway of the above three accounts for 40% of the metabolism of oxymorphone, making oxymorphone-3-glucuronide the main metabolite of oxymorphone, while the latter two pathways account for less than 10% of the metabolism of oxymorphone. After N-demethylation of oxymorphone, noroxymorphone is further glucuronidated to noroxymorphone-3-glucuronide. Because oxycodone is metabolized by the cytochrome P450 system in the liver, its pharmacokinetics can be influenced by genetic polymorphisms and drug interactions concerning this system, as well as by liver function. Some people are fast metabolizers of oxycodone, while others are slow metabolizers, resulting in polymorphism-dependent alterations in relative analgesia and toxicity. While higher CYP2D6 activity increases the effects of oxycodone (owing to increased conversion into oxymorphone), higher CYP3A4 activity has the opposite effect and decreases the effects of oxycodone (owing to increased metabolism into noroxycodone and noroxymorphone). The dose of oxycodone must be reduced in patients with reduced liver function. Elimination The clearance of oxycodone is 0.8 L/min. Oxycodone and its metabolites are mainly excreted in urine. Therefore, oxycodone accumulates in patients with kidney impairment. Oxycodone is eliminated in the urine 10% as unchanged oxycodone, 45% ± 21% as N-demethylated metabolites (noroxycodone, noroxymorphone, noroxycodols), 11 ± 6% as O-demethylated metabolites (oxymorphone, oxymorphols), and 8% ± 6% as 6-keto-reduced metabolites (oxycodols). Duration of action Oral oxycodone has a half-life of 4.5 hours. It is available as a generic medication. 
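Given the 4.5-hour half-life quoted above, the fraction of drug remaining after a period of elimination can be estimated with a one-line formula. The sketch below assumes simple single-compartment, first-order kinetics, which ignores the absorption and distribution phases discussed earlier.

```python
# Minimal sketch: fraction of an oxycodone dose remaining after a given time,
# assuming first-order elimination with the 4.5 h half-life quoted above.

HALF_LIFE_H = 4.5  # hours, from the text

def fraction_remaining(hours: float, half_life: float = HALF_LIFE_H) -> float:
    """Fraction of drug remaining after `hours` of first-order elimination."""
    return 0.5 ** (hours / half_life)   # equivalently exp(-ln(2) * t / t_half)

for t in (4.5, 9, 12, 24):
    print(f"after {t:>4} h: {fraction_remaining(t) * 100:.0f}% remaining")
```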
The manufacturer of OxyContin, a controlled-release preparation of oxycodone, Purdue Pharma, claimed in their 1992 patent application that the duration of action of OxyContin is 12 hours in "90% of patients". It has never performed any clinical studies in which OxyContin was given at more frequent intervals. In a separate filing, Purdue claims that controlled-release oxycodone "provides pain relief in said patient for at least 12 hours after administration". However, in 2016 an investigation by the Los Angeles Times found that "the drug wears off hours early in many people", inducing symptoms of opiate withdrawal and intense cravings for OxyContin. One doctor, Lawrence Robbins, told journalists that over 70% of his patients would report that OxyContin would only provide 4–7 hours of relief. Doctors in the 1990s often would switch their patients to a dosing schedule of once every eight hours when they complained that the duration of action for OxyContin was too short to be taken only twice a day. Mean serum concentration of controlled-release oxycodone peaks at 78 ng/ml at 1 hour and drops to 20 ng/ml at 8 hours and under 10 ng/ml at 12 hours. Purdue strongly discouraged the practice: Purdue's medical director Robert Reder wrote to one doctor in 1995 that "OxyContin has been developed for [12-hour] dosing...I request that you not use a [8-hourly] dosing regimen." Purdue repeatedly released memos to its sales representatives ordering them to remind doctors not to deviate from a 12-hour dosing schedule. One such memo read, "There is no Q8 dosing with OxyContin... [8-hour dosing] needs to be nipped in the bud. NOW!!" The journalists who covered the investigation argued that Purdue Pharma has insisted on a 12-hour duration of action for nearly all patients, despite evidence to the contrary, to protect the reputation of OxyContin as a 12-hour drug and the willingness of health insurance and managed care companies to cover OxyContin despite its high cost relative to generic opiates such as morphine. Purdue sales representatives were instructed to encourage doctors to write prescriptions for larger 12-hour doses instead of more frequent dosing. An August 1996 memo to Purdue sales representatives in Tennessee entitled "$$$$$$$$$$$$$ It's Bonus Time in the Neighborhood!" reminded the representatives that their commissions would dramatically increase if they were successful in convincing doctors to prescribe larger doses. Los Angeles Times journalists argue using interviews from opioid addiction experts that such high doses of OxyContin spaced 12 hours apart create a combination of agony during opiate withdrawal (lower lows) and a schedule of reinforcement that relieves this agony fostering addiction. As of 2024, the prescribing information for OxyContin still specifies a 12-hour dosing schedule as the only option; it also states, "there are no well-controlled clinical studies evaluating the safety and efficacy with dosing more frequently than every 12 hours." Chemistry Oxycodone's chemical name is derived from codeine. The chemical structures are very similar, differing only in that Oxycodone has a hydroxy group at carbon-14 (codeine has just a hydrogen in its place) Oxycodone has a 7,8-dihydro feature. Codeine has a double bond between those two carbons; and Oxycodone has a carbonyl group (as in ketones) in place of the hydroxyl group of codeine. It is also similar to hydrocodone, differing only in that it has a hydroxyl group at carbon-14. 
Biosynthesis In terms of biosynthesis, oxycodone has been found naturally in nectar extracts from the orchid family Epipactis helleborine; together along with another opioid: 3-{2-{3-{3-benzyloxypropyl}-3-indol, 7,8-didehydro- 4,5-epoxy-3,6-d-morphinan. Thodey et al., 2014 introduce a microbial compound manufacturing system for compounds including oxycodone. The Thodey platform produces both natural and semisynthetic opioids including this one. This system uses Saccharomyces cerevisiae with transgenes from Papaver somniferum (the opium poppy) and Pseudomonas putida to turn a thebaine input into other opiates and opioids. Detection in biological fluids Oxycodone and/or its major metabolites may be measured in blood or urine to monitor for clearance, non-medical use, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation. Many commercial opiate screening tests cross-react appreciably with oxycodone and its metabolites, but chromatographic techniques can easily distinguish oxycodone from other opiates. History Martin Freund and (Jakob) Edmund Speyer of the University of Frankfurt in Germany published the first synthesis of oxycodone from thebaine in 1916. When Freund died, in 1920, Speyer wrote his obituary for the German Chemical Society. Speyer, born to a Jewish family in Frankfurt am Main in 1878, became a victim of the Holocaust. He died on 5 May 1942, the second day of deportations from the Lodz Ghetto; his death was noted in the ghetto's chronicle. The first clinical use of the drug was documented in 1917, the year after it was first developed. It was first introduced to the U.S. market in May 1939. In early 1928, Merck introduced a combination product containing scopolamine, oxycodone, and ephedrine under the German initials for the ingredients SEE, which was later renamed Scophedal (SCOpolamine, ePHEDrine, and eukodAL) in 1942. It was last manufactured in 1987 but can be compounded. This combination is essentially an oxycodone analogue of the morphine-based "twilight sleep", with ephedrine added to reduce circulatory and respiratory effects. The drug became known as the "Miracle Drug of the 1930s" in Continental Europe and elsewhere and it was the Wehrmacht's choice for a battlefield analgesic for a time. The drug was expressly designed to provide what the patent application and package insert referred to as "very deep analgesia and profound and intense euphoria" as well as tranquillisation and anterograde amnesia useful for surgery and battlefield wounding cases. Oxycodone was allegedly chosen over other common opiates for this product because it had been shown to produce less sedation at equianalgesic doses compared to morphine, hydromorphone (Dilaudid), and hydrocodone (Dicodid). During Operation Himmler, Skophedal was also reportedly injected in massive overdose into the prisoners dressed in Polish Army uniforms in the staged incident on 1 September 1939 which opened the Second World War. The personal notes of Adolf Hitler's physician, Theodor Morell, indicate Hitler received repeated injections of "Eukodal" (oxycodone; produced by Merck) and Scophedal, as well as Dolantin (pethidine) codeine, and morphine less frequently; oxycodone could not be obtained after late January 1945. In the United States, the Controlled Substances Act (CSA) was passed by the United States Congress and signed into law by President Richard Nixon on 27 October 1970. 
The passing of the CSA resulted in all products containing oxycodone being classified as a Schedule II controlled substance. Purdue Pharma, a privately held company based in Stamford, Connecticut, developed the prescription painkiller OxyContin. It was approved by the FDA in 1995 after no long-term studies and no assessment of its addictive capabilities. David Kessler, FDA commissioner at the time, later said of the approval of OxyContin: "No doubt it was a mistake. It was certainly one of the worst medical mistakes, a major mistake." Upon its release in 1995, OxyContin was hailed as a medical breakthrough, a long-lasting narcotic that could help patients with moderate to severe pain. The drug became a blockbuster and has reportedly generated some US$35 billion in revenue for Purdue. Opioid epidemic Oxycodone, like other opioid analgesics, tends to induce feelings of euphoria, relaxation, and reduced anxiety in those who are occasional users. These effects make it one of the most commonly abused pharmaceutical drugs in the United States. The abuse of Oxycodone, as well as related opioids more broadly, is not unique to the United States and is a common drug of abuse globally. United States Oxycodone is the most widely recreationally used opioid in America. In the United States, more than 12 million people use opioid drugs recreationally. The U.S. Department of Health and Human Services estimates that about 11 million people in the U.S. consume oxycodone in a non-medical way annually. Opioids were responsible for 49,000 of the 72,000 drug overdose deaths in the U.S. in 2017. In 2007, about 42,800 emergency room visits occurred due to "episodes" involving oxycodone. In 2008, recreational use of oxycodone and hydrocodone was involved in 14,800 deaths. Some of the cases were due to overdoses of the acetaminophen component, resulting in fatal liver damage. In September 2013, the FDA released new labeling guidelines for long-acting and extended-release opioids requiring manufacturers to remove moderate pain as an indication for use, instead stating the drug is for "pain severe enough to require daily, around-the-clock, long term opioid treatment". The updated labeling will not restrict physicians from prescribing opioids for moderate pain, as needed. Reformulated OxyContin is causing some recreational users to change to heroin, which is cheaper and easier to obtain. Lawsuits In October 2017, The New Yorker published a story on Mortimer Sackler and Purdue Pharma regarding their ties to the production and manipulation of the oxycodone markets. The article links Raymond and Arthur Sackler's business practices with the rise of direct pharmaceutical marketing and eventually to the rise of addiction to oxycodone in the United States. The article implies that the Sackler family bears some responsibility for the opioid epidemic in the United States. In 2019, The New York Times ran a piece confirming that Richard Sackler, the son of Raymond Sackler, told company officials in 2008 to "measure our performance by Rx's by strength, giving higher measures to higher strengths". This was verified with documents tied to a lawsuit – which was filed by the Massachusetts attorney general, Maura Healey – claiming that Purdue Pharma and members of the Sackler family knew that high doses of OxyContin over long periods would increase the risk of serious side effects, including addiction. 
Despite Purdue Pharma's proposal for a US$12 billion settlement of the lawsuit, the attorneys general of 23 states, including Massachusetts, rejected the settlement offer in September 2019. Australia The non-medical use of oxycodone existed since the early 1970s, but by 2015, 91% of a national sample of injecting drug users in Australia had reported using oxycodone, and 27% had injected it in the last six months. Canada Opioid-related deaths in Ontario had increased by 242% from 1969 to 2014. By 2009 in Ontario there were more deaths from oxycodone overdoses than from cocaine overdoses. Deaths from opioid pain relievers had increased from 13.7 deaths per million residents in 1991 to 27.2 deaths per million residents in 2004. The non-medical use of oxycodone in Canada became a problem. Areas where oxycodone is most problematic are Atlantic Canada and Ontario, where its non-medical use is prevalent in rural towns and in many smaller to medium-sized cities. Oxycodone is also widely available across Western Canada, but methamphetamine and heroin are more serious problems in larger cities, while oxycodone is more common in rural towns. Oxycodone is diverted through doctor shopping, prescription forgery, pharmacy theft, and overprescription. The recent formulations of oxycodone, particularly Purdue Pharma's crush-, chew-, injection- and dissolve-resistant OxyNEO which replaced the banned OxyContin product in Canada in early 2012, have led to a decline in the recreational use of this opiate but have increased the recreational use of the more potent drug fentanyl. According to a Canadian Centre on Substance Abuse study quoted in Maclean's magazine, there were at least 655 fentanyl-related deaths in Canada in five years. In Alberta, the Blood Tribe police claimed that from the fall of 2014 through January 2015, oxycodone pills or a lethal fake variation referred to as Oxy 80s containing fentanyl made in illegal labs by members of organized crime were responsible for ten deaths on the Blood Reserve, which is located southwest of Lethbridge, Alberta. Province-wide, approximately 120 Albertans died from fentanyl-related overdoses in 2014. United Kingdom Prescriptions of Oxycodone rose in Scotland by 430% between 2002 and 2008, prompting fears of usage problems that would mirror those of the United States. The first known death due to overdose in the UK occurred in 2002. Preventive measures In August 2010, Purdue Pharma reformulated their long-acting oxycodone line, marketed as OxyContin, using a polymer, Intac, to make the pills more difficult to crush or dissolve in water to reduce non-medical use of OxyContin. Inactive ingredients/excipients are butylated hydroxytoluene (BHT), hypromellose, polyethylene glycol 400, polyethylene oxide, magnesium stearate, and titanium dioxide. The FDA approved relabeling the reformulated version as abuse-resistant in April 2013. Pfizer manufactures a preparation of short-acting oxycodone, marketed as Oxecta, which contains inactive ingredients, referred to as tamper-resistant Aversion Technology. Approved by the FDA in the U.S. in June 2011, the new formulation, while not being able to deter oral recreational use, makes crushing, chewing, snorting, or injecting the opioid impractical because of a change in its chemical properties. Legal status Oxycodone is subject to international conventions on narcotic drugs. In addition, oxycodone is subject to national laws that differ by country. 
The 1931 Convention for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs of the League of Nations included oxycodone. The 1961 Single Convention on Narcotic Drugs of the United Nations, which replaced the 1931 convention, categorized oxycodone in Schedule I. Global restrictions on Schedule I drugs include "limit[ing] exclusively to medical and scientific purposes the production, manufacture, export, import, distribution of, trade in, use and possession of" these drugs; "requir[ing] medical prescriptions for the supply or dispensation of [these] drugs to individuals"; and "prevent[ing] the accumulation" of quantities of these drugs "in excess of those required for the normal conduct of business". Australia Oxycodone is in Schedule I (derived from the Single Convention on Narcotic Drugs) of the Commonwealth's Narcotic Drugs Act 1967. In addition, it is in Schedule 8 of the Australian Standard for the Uniform Scheduling of Drugs and Poisons ("Poisons Standard"), meaning it is a "controlled drug... which should be available for use but require[s] restriction of manufacture, supply, distribution, possession and use to reduce abuse, misuse and physical or psychological dependence". Canada Oxycodone is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). In February 2012, Ontario passed legislation to allow the expansion of an already existing drug-tracking system for publicly funded drugs to include those that are privately insured. This database will function to identify and monitor patient's attempts to seek prescriptions from multiple doctors or retrieve them from multiple pharmacies. Other provinces have proposed similar legislation, while some, such as Nova Scotia, have legislation already in effect for monitoring prescription drug use. These changes have coincided with other changes in Ontario's legislation to target the misuse of painkillers and high addiction rates to drugs such as oxycodone. As of 29 February 2012, Ontario passed legislation delisting oxycodone from the province's public drug benefit program. This was a first for any province to delist a drug based on addictive properties. The new law prohibits prescriptions for OxyNeo except to certain patients under the Exceptional Access Program including palliative care and in other extenuating circumstances. Patients already prescribed oxycodone will receive coverage for an additional year for OxyNeo, and after that, it will be disallowed unless designated under the exceptional access program. Much of the legislative activity has stemmed from Purdue Pharma's decision in 2011 to begin a modification of Oxycontin's composition to make it more difficult to crush for snorting or injecting. The new formulation, OxyNeo, is intended to be preventive in this regard and retain its effectiveness as a painkiller. Since introducing its Narcotics Safety and Awareness Act, Ontario has committed to focusing on drug addiction, particularly in the monitoring and identification of problem opioid prescriptions, as well as the education of patients, doctors, and pharmacists. This Act, introduced in 2010, commits to the establishment of a unified database to fulfil this intention. Both the public and medical community have received the legislation positively, though concerns about the ramifications of legal changes have been expressed. 
Because laws are largely provincially regulated, many speculate a national strategy is needed to prevent smuggling across provincial borders from jurisdictions with looser restrictions. In 2015, Purdue Pharma's abuse-resistant OxyNEO and six generic versions of OxyContin had been on the Canada-wide approved list for prescriptions since 2012. In June 2015, then-federal Minister of Health Rona Ambrose announced that within three years, all oxycodone products sold in Canada would need to be tamper-resistant. Some experts warned that the generic product manufacturers may not have the technology to achieve that goal, possibly giving Purdue Pharma a monopoly on this opiate. Several class-action suits across Canada have been launched against the Purdue group of companies and affiliates. Claimants argue the pharmaceutical manufacturers did not meet a standard of care and were negligent in doing so. These lawsuits reference earlier judgments in the United States, which held that Purdue was liable for wrongful marketing practices and misbranding. Since 2007, the Purdue companies have paid over CAN$650 million in settling litigation or facing criminal fines. Germany The drug is in Appendix III of the Narcotics Act (Betäubungsmittelgesetz or BtMG). The law allows only physicians, dentists, and veterinarians to prescribe oxycodone and the federal government to regulate the prescriptions (e.g., by requiring reporting). Hong Kong Oxycodone is regulated under Part I of Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. Japan Oxycodone is a restricted drug in Japan. Its import and export are strictly restricted to specially designated organizations having a prior permit to import it. In a high-profile case an American who was a top Toyota executive living in Tokyo, who claimed to be unaware of the law, was arrested for importing oxycodone into Japan. Singapore Oxycodone is listed as a Class A drug in the Misuse of Drugs Act of Singapore, which means offences concerning the drug attract the most severe level of punishment. A conviction for unauthorized manufacture of the drug attracts a minimum sentence of 10 years of imprisonment and corporal punishment of 5 strokes of the cane, and a maximum sentence of life imprisonment or 30 years of imprisonment and 15 strokes of the cane. The minimum and maximum penalties for unauthorized trafficking in the drug are respectively 5 years of imprisonment and 5 strokes of the cane, and 20 years of imprisonment and 15 strokes of the cane. United Kingdom Oxycodone is a Class A drug under the Misuse of Drugs Act 1971. For Class A drugs, which are "considered to be the most likely to cause harm", possession without a prescription is punishable by up to seven years in prison, an unlimited fine, or both. Dealing of the drug illegally is punishable by up to life imprisonment, an unlimited fine, or both. Oxycodone is a Schedule 2 drug under the Misuse of Drugs Regulations 2001 which "provide certain exemptions from the provisions of the Misuse of Drugs Act 1971". United States Under the Controlled Substances Act, oxycodone is a Schedule II controlled substance whether by itself or part of a multi-ingredient medication. The Drug Enforcement Administration (DEA) lists oxycodone both for sale and for use in manufacturing other opioids as ACSCN 9143 and in 2013 approved the following annual aggregate manufacturing quotas: 131.5 metric tons for sale, down from 153.75 in 2012, and 10.25 metric tons for conversion, unchanged from the previous year. 
In 2020, oxycodone possession was decriminalized in the U.S. state of Oregon. Economics The International Narcotics Control Board estimated of oxycodone were manufactured worldwide in 1998; by 2007 this figure had grown to . United States accounted for 82% of consumption in 2007 at . Canada, Germany, Australia, and France combined accounted for 13% of consumption in 2007. In 2010, of oxycodone were illegally manufactured using a fake pill imprint. This accounted for 0.8% of consumption. These illicit tablets were later seized by the U.S. Drug Enforcement Administration, according to the International Narcotics Control Board. The board also reported manufactured in 2010. This number had decreased from a record high of in 2009. Names Expanded expressions for the compound oxycodone in the academic literature include "dihydrohydroxycodeinone", "Eucodal", "Eukodal", "14-hydroxydihydrocodeinone", and "Nucodan". In a UNESCO convention, the translations of "oxycodone" are oxycodon (Dutch), oxycodone (French), oxicodona (Spanish), (Arabic), (Chinese), and (Russian). The word "oxycodone" should not be confused with "oxandrolone", "oxazepam", "oxybutynin", "oxytocin", or "Roxanol". Other brand names include Longtec and Shortec.
Biology and health sciences
Pain treatments
Health
22804
https://en.wikipedia.org/wiki/Operational%20amplifier
Operational amplifier
An operational amplifier (often op amp or opamp) is a DC-coupled electronic voltage amplifier with a differential input, a (usually) single-ended output, and an extremely high gain. Its name comes from its original use of performing mathematical operations in analog computers. By using negative feedback, an op amp circuit's characteristics (e.g. its gain, input and output impedance, bandwidth, and functionality) can be determined by external components and have little dependence on temperature coefficients or engineering tolerance in the op amp itself. This flexibility has made the op amp a popular building block in analog circuits. Today, op amps are used widely in consumer, industrial, and scientific electronics. Many standard integrated circuit op amps cost only a few cents; however, some integrated or hybrid operational amplifiers with special performance specifications may cost considerably more. Op amps may be packaged as components or used as elements of more complex integrated circuits. The op amp is one type of differential amplifier. Other differential amplifier types include the fully differential amplifier (an op amp with a differential rather than single-ended output), the instrumentation amplifier (usually built from three op amps), the isolation amplifier (with galvanic isolation between input and output), and the negative-feedback amplifier (usually built from one or more op amps and a resistive feedback network). Operation The amplifier's differential inputs consist of a non-inverting input (+) with voltage V+ and an inverting input (−) with voltage V−; ideally the op amp amplifies only the difference in voltage between the two, which is called the differential input voltage. The output voltage of the op amp Vout is given by the equation Vout = AOL (V+ − V−), where AOL is the open-loop gain of the amplifier (the term "open-loop" refers to the absence of an external feedback loop from the output to the input). Open-loop amplifier The magnitude of AOL is typically very large (100,000 or more for integrated circuit op amps, corresponding to +100 dB). Thus, even a few microvolts of difference between V+ and V− may drive the amplifier into clipping or saturation. The magnitude of AOL is not well controlled by the manufacturing process, and so it is impractical to use an open-loop amplifier as a stand-alone differential amplifier. Without negative feedback (and optionally with positive feedback for regeneration), an open-loop op amp acts as a comparator, although dedicated comparator ICs are better suited to this role. If the inverting input is held at ground (0 V), and the input voltage Vin applied to the non-inverting input is positive, the output will be maximum positive; if Vin is negative, the output will be maximum negative. Closed-loop amplifier If predictable operation is desired, negative feedback is used, by applying a portion of the output voltage to the inverting input. The closed-loop feedback greatly reduces the gain of the circuit. When negative feedback is used, the circuit's overall gain and response is determined primarily by the feedback network, rather than by the op-amp characteristics. If the feedback network is made of components with values small relative to the op amp's input impedance, the value of the op amp's open-loop response AOL does not seriously affect the circuit's performance. In this context, high input impedance at the input terminals and low output impedance at the output terminal(s) are particularly useful features of an op amp. 
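As a rough numerical illustration of the open-loop relation above, the sketch below clamps Vout = AOL(V+ − V−) to assumed ±15 V supply rails; the gain and supply values are illustrative, not taken from any particular device.

```python
# Illustrative sketch of open-loop behaviour: Vout = AOL * (V+ - V-), clipped
# to the supply rails. With AOL = 100,000 even a 100 µV input difference would
# demand 10 V of output, so tiny differences drive the output into saturation,
# which is why an open-loop op amp behaves like a comparator.

A_OL = 100_000          # open-loop gain (typical IC value from the text)
V_SUPPLY = 15.0         # assumed ±15 V rails (illustrative)

def open_loop_output(v_plus: float, v_minus: float) -> float:
    """Ideal open-loop output, limited by the supply rails."""
    vout = A_OL * (v_plus - v_minus)
    return max(-V_SUPPLY, min(V_SUPPLY, vout))

for dv in (1e-6, 100e-6, 1e-3, -1e-3):
    print(f"V+ - V- = {dv: .6f} V  ->  Vout = {open_loop_output(dv, 0.0): .3f} V")
```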
The response of the op-amp circuit with its input, output, and feedback circuits to an input is characterized mathematically by a transfer function; designing an op-amp circuit to have a desired transfer function is in the realm of electrical engineering. The transfer functions are important in most applications of op amps, such as in analog computers. In the non-inverting amplifier, the presence of negative feedback via the voltage divider Rf, Rg determines the closed-loop gain ACL = 1 + Rf/Rg. Equilibrium will be established when Vout is just sufficient to pull the inverting input to the same voltage as Vin. The voltage gain of the entire circuit is thus 1 + Rf/Rg. As a simple example, if Vin = 1 V and Rf = Rg, Vout will be 2 V, exactly the amount required to keep V− at 1 V. Because of the feedback provided by the Rf, Rg network, this is a closed-loop circuit. Another way to analyze this circuit proceeds by making the following (usually valid) assumptions: When an op amp operates in linear (i.e., not saturated) mode, the difference in voltage between the non-inverting (+) and inverting (−) pins is negligibly small. The input impedance of the (+) and (−) pins is much larger than other resistances in the circuit. The input signal Vin appears at both (+) and (−) pins per assumption 1, resulting in a current i = Vin/Rg through Rg. Since Kirchhoff's current law states that the same current must leave a node as enter it, and since the impedance into the (−) pin is near infinity per assumption 2, we can assume practically all of the same current i flows through Rf, creating an output voltage Vout = Vin + i Rf = Vin (1 + Rf/Rg). By combining terms, we determine the closed-loop gain ACL = Vout / Vin = 1 + Rf/Rg. Op-amp characteristics Ideal op amps An ideal op amp is usually considered to have the following characteristics: Infinite open-loop gain G = vout / vin Infinite input impedance Rin, and so zero input current Zero input offset voltage Infinite output voltage range Infinite bandwidth with zero phase shift and infinite slew rate Zero output impedance Rout, and so infinite output current range Zero noise Infinite common-mode rejection ratio (CMRR) Infinite power supply rejection ratio. These ideals can be summarized by two golden rules: In a closed loop the output does whatever is necessary to make the voltage difference between the inputs zero. The inputs draw zero current. The first rule only applies in the usual case where the op amp is used in a closed-loop design (negative feedback, where there is a signal path of some sort feeding back from the output to the inverting input). These rules are commonly used as a good first approximation for analyzing or designing op-amp circuits. None of these ideals can be perfectly realized. A real op amp may be modeled with non-infinite or non-zero parameters using equivalent resistors and capacitors in the op-amp model. The designer can then include these effects into the overall performance of the final circuit. Some parameters may turn out to have negligible effect on the final design while others represent actual limitations of the final performance. Real op amps Real op amps differ from the ideal model in various aspects. Finite gain Open-loop gain is finite in real operational amplifiers. Typical devices exhibit open-loop DC gain exceeding 100,000. So long as the loop gain (i.e., the product of open-loop and feedback gains) is very large, the closed-loop gain will be determined entirely by the amount of negative feedback (i.e., it will be independent of open-loop gain). 
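A short sketch of both points: the ideal closed-loop gain 1 + Rf/Rg, and how it degrades when the open-loop gain AOL is finite, using the standard expression ACL = AOL / (1 + AOL·β) with feedback fraction β = Rg / (Rf + Rg). The component values are arbitrary examples chosen for a nominal gain of 100.

```python
# Closed-loop gain of a non-inverting amplifier: the ideal value and the value
# obtained with a finite open-loop gain AOL.

def ideal_gain(r_f: float, r_g: float) -> float:
    """Ideal-op-amp result: ACL = 1 + Rf/Rg."""
    return 1 + r_f / r_g

def finite_gain(a_ol: float, r_f: float, r_g: float) -> float:
    """Closed-loop gain with finite AOL: ACL = AOL / (1 + AOL*beta)."""
    beta = r_g / (r_f + r_g)          # fraction of the output fed back
    return a_ol / (1 + a_ol * beta)

r_f, r_g = 99e3, 1e3                  # nominal gain of 100 (example values)
print(f"ideal gain: {ideal_gain(r_f, r_g):.2f}")
for a_ol in (1e3, 1e4, 1e5, 1e6):
    print(f"AOL = {a_ol:>9.0f} -> ACL = {finite_gain(a_ol, r_f, r_g):.2f}")
# The closed-loop gain approaches the ideal 100 only when the loop gain
# (AOL * beta) is much larger than 1, as stated above.
```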
In applications where the closed-loop gain must be very high (approaching the open-loop gain), the feedback gain will be very low and the lower loop gain in these cases causes non-ideal behavior from the circuit. Non-zero output impedance Low output impedance is important for low-impedance loads; for these loads, the voltage drop across the output impedance effectively reduces the open-loop gain. In configurations with voltage-sensing negative feedback, the output impedance of the amplifier is effectively lowered; thus, in linear applications, op-amp circuits usually exhibit a very low output impedance. Low-impedance outputs typically require high quiescent (i.e., idle) current in the output stage and will dissipate more power, so low-power designs may purposely sacrifice low output impedance. Finite input impedances The differential input impedance of the operational amplifier is defined as the impedance between its two inputs; the common-mode input impedance is the impedance from each input to ground. MOSFET-input operational amplifiers often have protection circuits that effectively short circuit any input differences greater than a small threshold, so the input impedance can appear to be very low in some tests. However, as long as these operational amplifiers are used in a typical high-gain negative feedback application, these protection circuits will be inactive. The input bias and leakage currents described below are a more important design parameter for typical operational amplifier applications. Input capacitance Additional input impedance due to parasitic capacitance can be a critical issue for high-frequency operation where it reduces input impedance and may cause phase shifts. Input current Due to biasing requirements or leakage, a small amount of current flows into the inputs. When high resistances or sources with high output impedances are used in the circuit, these small currents can produce significant voltage drops. If the input currents are matched, and the impedances looking out of both inputs are matched, then the voltages produced at each input will be equal. Because the operational amplifier operates on the difference between its inputs, these matched voltages will have no effect. It is more common for the input currents to be slightly mismatched. The difference is called input offset current, and even with matched resistances a small offset voltage (distinct from the input offset voltage below) can be produced. This offset voltage can create offsets or drifting in the operational amplifier. Input offset voltage Input offset voltage is a voltage required across the op amp's input terminals to drive the output voltage to zero. In the perfect amplifier, there would be no input offset voltage. However, it exists because of imperfections in the differential amplifier input stage of op amps. Input offset voltage creates two problems: First, due to the amplifier's high voltage gain, it virtually assures that the amplifier output will go into saturation if it is operated without negative feedback, even when the input terminals are wired together. Second, in a closed loop, negative feedback configuration, the input offset voltage is amplified along with the signal and this may pose a problem if high precision DC amplification is required or if the input signal is very small. Common-mode gain A perfect operational amplifier amplifies only the voltage difference between its two inputs, completely rejecting all voltages that are common to both. 
However, the differential input stage of an operational amplifier is never perfect, leading to the amplification of these common voltages to some degree. The standard measure of this defect is called the common-mode rejection ratio (CMRR). Minimization of common-mode gain is important in non-inverting amplifiers that operate at high gain. Power-supply rejection The output of a perfect operational amplifier will be independent of power supply voltage fluctuations. Every real operational amplifier has a finite power supply rejection ratio (PSRR) that reflects how well the op amp can reject noise in its power supply from propagating to the output. With increasing frequency the power-supply rejection usually gets worse. Temperature effects Performance and properties of the amplifier typically change, to some extent, with changes in temperature. Temperature drift of the input offset voltage is especially important. Drift Real op-amp parameters are subject to slow change over time and with changes in temperature, input conditions, etc. Finite bandwidth All amplifiers have finite bandwidth. To a first approximation, the op amp has the frequency response of an integrator with gain. That is, the gain of a typical op amp is inversely proportional to frequency and is characterized by its gain–bandwidth product (GBWP). For example, an op amp with a GBWP of 1 MHz would have a gain of 5 at 200 kHz, and a gain of 1 at 1 MHz. This dynamic response coupled with the very high DC gain of the op amp gives it the characteristics of a first-order low-pass filter with very high DC gain and a low cutoff frequency given by the GBWP divided by the DC gain. The finite bandwidth of an op amp can be the source of several problems. Typical low-cost, general-purpose op amps exhibit a GBWP of a few megahertz. Specialty and high-speed op amps exist that can achieve a GBWP of hundreds of megahertz. For very high-frequency circuits, a current-feedback operational amplifier is often used. Noise Amplifiers intrinsically output noise, even when there is no signal applied. This can be due to internal thermal noise and flicker noise of the device. For applications with high gain or high bandwidth, noise becomes an important consideration and a low-noise amplifier, which is specifically designed for minimum intrinsic noise, may be required to meet performance requirements. Non-linear imperfections Saturation Output voltage is limited to a minimum and maximum value close to the power supply voltages. The output of older op amps can reach to within one or two volts of the supply rails. The output of so-called rail-to-rail op amps can reach to within millivolts of the supply rails when providing low output currents. Slew rate limiting The amplifier's output voltage reaches its maximum rate of change, the slew rate, usually specified in volts per microsecond (V/μs). When slew rate limiting occurs, further increases in the input signal have no effect on the rate of change of the output. Slew rate limiting is usually caused by the input stage saturating; the result is a constant current driving a capacitance in the amplifier (especially those capacitances used to implement its frequency compensation); the slew rate is then limited by the ratio of this current to that capacitance (SR = I/C). Slewing is associated with the large-signal performance of an op amp. Consider, for example, an op amp configured for a gain of 10. Let the input be a 1V, 100 kHz sawtooth wave. That is, the amplitude is 1V and the period is 10 microseconds. 
Accordingly, the rate of change (i.e., the slope) of the input is 0.1 V per microsecond. After 10× amplification, the output should be a 10V, 100 kHz sawtooth, with a corresponding slew rate of 1V per microsecond. However, the classic 741 op amp has a 0.5V per microsecond slew rate specification so that its output can rise to no more than 5V in the sawtooth's 10-microsecond period. Thus, if one were to measure the output, it would be a 5V, 100 kHz sawtooth, rather than a 10V, 100 kHz sawtooth.Next consider the same amplifier and 100 kHz sawtooth, but now the input amplitude is 100mV rather than 1V. After 10× amplification the output is a 1V, 100 kHz sawtooth with a corresponding slew rate of 0.1V per microsecond. In this instance, the 741 with its 0.5V per microsecond slew rate will amplify the input properly. Modern high-speed op amps can have slew rates in excess of 5,000V per microsecond. However, it is more common for op amps to have slew rates in the range 5–100V per microsecond. For example, the general purpose TL081 op amp has a slew rate of 13V per microsecond. As a general rule, low power and small bandwidth op amps have low slew rates. As an example, the LT1494 micropower op amp consumes 1.5 microamp but has a 2.7 kHz gain-bandwidth product and a 0.001V per microsecond slew rate. Non-linear input-output relationship The output voltage may not be accurately proportional to the difference between the input voltages producing distortion. This effect will be very small in a practical circuit where substantial negative feedback is used. Phase reversal In some integrated op amps, when the published common mode voltage is violated (e.g., by one of the inputs being driven to one of the supply voltages), the output may slew to the opposite polarity from what is expected in normal operation. Under such conditions, negative feedback becomes positive, likely causing the circuit to lock up in that state. Power considerations Limited output current The output current must be finite. In practice, most op amps are designed to limit the output current to prevent damage to the device, typically around 25 mA for a type 741 IC op amp. Modern designs are electronically more robust than earlier implementations and some can sustain direct short circuits on their outputs without damage. Limited output voltage Output voltage cannot exceed the power supply voltage supplied to the op amp. The maximum output of most op amps is further reduced by some amount due to limitations in the output circuitry. Rail-to-rail op amps are designed for maximum output levels. Output sink current The output sink current is the maximum current allowed to sink into the output stage. Some manufacturers provide an output voltage vs. the output sink current plot which gives an idea of the output voltage when it is sinking current from another source into the output pin. Limited dissipated power The output current flows through the op amp's internal output impedance, generating heat that must be dissipated. If the op amp dissipates too much power, then its temperature will increase above some safe limit. The op amp must shut down or risk being damaged. Modern integrated FET or MOSFET op amps approximate more closely the ideal op amp than bipolar ICs when it comes to input impedance and input bias currents. Bipolars are generally better when it comes to input voltage offset, and often have lower noise. 
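Looping back to the sawtooth example worked through above, the arithmetic can be checked with a short script; the 0.5 V/µs figure is the classic 741 specification quoted in the text, and the remaining numbers come directly from the example.

```python
# Sketch reproducing the slew-rate example above: a gain-of-10 amplifier with
# the 741's 0.5 V/µs slew-rate limit, driven by a 100 kHz sawtooth. The
# achievable output amplitude is capped at slew_rate * ramp_time.

def max_output_swing(slew_rate_v_per_us: float, period_us: float) -> float:
    """Largest output change possible within one ramp of the sawtooth."""
    return slew_rate_v_per_us * period_us

GAIN = 10
PERIOD_US = 10.0            # 100 kHz sawtooth
SLEW_741 = 0.5              # V/µs, classic 741 specification

for vin_amplitude in (1.0, 0.1):
    ideal_out = GAIN * vin_amplitude
    actual_out = min(ideal_out, max_output_swing(SLEW_741, PERIOD_US))
    print(f"{vin_amplitude} V in -> ideal {ideal_out} V out, "
          f"slew-limited to {actual_out} V")
# 1 V in: ideal 10 V but limited to 5 V; 0.1 V in: 1 V, within the limit.
```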
Generally, at room temperature, with a fairly large signal, and limited bandwidth, FET and MOSFET op amps now offer better performance. Internal circuitry of 741-type op amp Sourced by many manufacturers, and in multiple similar products, an example of a bipolar transistor operational amplifier is the 741 integrated circuit designed in 1968 by David Fullagar at Fairchild Semiconductor after Bob Widlar's LM301 integrated circuit design. In this discussion, we use the parameters of the hybrid-pi model to characterize the small-signal, grounded emitter characteristics of a transistor. In this model, the current gain of a transistor is denoted hfe, more commonly called the β. Architecture A small-scale integrated circuit, the 741 op amp shares with most op amps an internal structure consisting of three gain stages: Differential amplifier (outlined dark blue) — provides high differential amplification (gain), with rejection of common-mode signal, low noise, high input impedance, and drives a Voltage amplifier (outlined magenta) — provides high voltage gain, a single-pole frequency roll-off, and in turn drives the Output amplifier (outlined cyan and green) — provides high current gain (low output impedance), along with output current limiting, and output short-circuit protection. Additionally, it contains current mirror (outlined red) bias circuitry and a compensation capacitor (30 pF). Differential amplifier The input stage consists of a cascaded differential amplifier (outlined in dark blue) followed by a current-mirror active load. This constitutes a transconductance amplifier, turning a differential voltage signal at the bases of Q1, Q2 into a current signal into the base of Q15. It entails two cascaded transistor pairs, satisfying conflicting requirements. The first stage consists of the matched NPN emitter follower pair Q1, Q2 that provide high input impedance. The second is the matched PNP common-base pair Q3, Q4 that eliminates the undesirable Miller effect; it drives an active load Q7 plus matched pair Q5, Q6. That active load is implemented as a modified Wilson current mirror; its role is to convert the (differential) input current signal to a single-ended signal without the attendant 50% losses (increasing the op amp's open-loop gain by 3 dB). Thus, a small-signal differential current in Q3 versus Q4 appears summed (doubled) at the base of Q15, the input of the voltage gain stage. Voltage amplifier The (class-A) voltage gain stage (outlined in magenta) consists of the two NPN transistors Q15 and Q19 connected in a Darlington configuration and uses the output side of the current mirror formed by Q12 and Q13 as its collector (dynamic) load to achieve its high voltage gain. The output sink transistor Q20 receives its base drive from the common collectors of Q15 and Q19; the level-shifter Q16 provides base drive for the output source transistor Q14. The transistor Q22 prevents this stage from delivering excessive current to Q20 and thus limits the output sink current. Output amplifier The output stage (Q14, Q20, outlined in cyan) is a Class AB amplifier. It provides an output drive with impedance of ~50 Ω, in essence, current gain. Transistor Q16 (outlined in green) provides the quiescent current for the output transistors and Q17 limits output source current. Biasing circuits Biasing circuits provide appropriate quiescent current for each stage of the op amp. 
The resistor (39 kΩ) connecting the (diode-connected) Q11 and Q12, and the given supply voltage (VS+ − VS−), determine the current in the current mirrors, (matched pairs) Q10/Q11 and Q12/Q13. The collector current of Q11, i11, satisfies i11 × 39 kΩ = VS+ − VS− − 2 VBE. For the typical VS = ±20 V, the standing current in Q11 and Q12 (as well as in Q13) would be ~1 mA. A supply current for a typical 741 of about 2 mA agrees with the notion that these two bias currents dominate the quiescent supply current. Transistors Q11 and Q10 form a Widlar current mirror, with a quiescent current in Q10, i10, such that ln(i11 / i10) = i10 × 5 kΩ / 28 mV, where 5 kΩ represents the emitter resistor of Q10, and 28 mV is VT, the thermal voltage at room temperature. In this case i10 ≈ 20 μA. Differential amplifier The biasing circuit of this stage is set by a feedback loop that forces the collector currents of Q10 and Q9 to (nearly) match. Any small difference in these currents provides drive for the common base of Q3 and Q4. The summed quiescent current through Q1 and Q3 plus Q2 and Q4 is mirrored from Q8 into Q9, where it is summed with the collector current in Q10, the result being applied to the bases of Q3 and Q4. The quiescent currents through Q1 and Q3 (also Q2 and Q4) i1 will thus be half of i10, of order ~10 μA. Input bias current for the base of Q1 (also Q2) will amount to i1 / β; typically ~50 nA, implying a current gain hfe ≈ 200 for Q1 (also Q2). This feedback circuit tends to draw the common base node of Q3/Q4 to a voltage Vcom − 2 VBE, where Vcom is the input common-mode voltage. At the same time, the magnitude of the quiescent current is relatively insensitive to the characteristics of the components Q1–Q4, such as hfe, that would otherwise cause temperature dependence or part-to-part variations. Transistor Q7 drives Q5 and Q6 into conduction until their (equal) collector currents match that of Q1/Q3 and Q2/Q4. The quiescent current in Q7 is VBE / 50 kΩ, about 35 μA, as is the quiescent current in Q15, with its matching operating point. Thus, the quiescent currents are pairwise matched in Q1/Q2, Q3/Q4, Q5/Q6, and Q7/Q15. Voltage amplifier Quiescent currents in Q16 and Q19 are set by the current mirror Q12/Q13, which is running at ~1 mA. The collector current in Q19 tracks that standing current. Output amplifier In the circuit involving Q16 (variously named rubber diode or VBE multiplier), the 4.5 kΩ resistor must be conducting about 100 μA, with Q16 VBE roughly 700 mV. Then VCB must be about 0.45 V and VCE at about 1.0 V. Because the Q16 collector is driven by a current source and the Q16 emitter drives into the Q19 collector current sink, the Q16 transistor establishes a voltage difference between the Q14 base and the Q20 base of ~1 V, regardless of the common-mode voltage of Q14/Q20 bases. The standing current in Q14/Q20 will be a factor exp(100 mV / VT) ≈ 36 smaller than the 1 mA quiescent current in the class A portion of the op amp. This (small) standing current in the output transistors establishes the output stage in class AB operation and reduces the crossover distortion of this stage. Small-signal differential mode A small differential input voltage signal gives rise, through multiple stages of current amplification, to a much larger voltage signal on output. Input impedance The input stage with Q1 and Q3 is similar to an emitter-coupled pair (long-tailed pair), with Q2 and Q4 adding some degenerating impedance. The input impedance is relatively high because of the small current through Q1-Q4. 
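As a quick numerical check of the Widlar mirror relation given above, VT ln(i11/i10) = i10 × 5 kΩ, the transcendental equation can be solved by simple fixed-point iteration; the input values are those quoted in the text.

```python
# Sketch: solve VT * ln(i11 / i10) = i10 * R_E for i10 by fixed-point
# iteration, using i11 ~ 1 mA, R_E = 5 kOhm and VT ~ 28 mV as quoted above.

import math

def widlar_output_current(i_ref: float, r_e: float, v_t: float = 0.028,
                          iterations: int = 50) -> float:
    """Iteratively solve i_out = (VT / R_E) * ln(i_ref / i_out)."""
    i_out = i_ref / 10          # crude starting guess
    for _ in range(iterations):
        i_out = (v_t / r_e) * math.log(i_ref / i_out)
    return i_out

i10 = widlar_output_current(i_ref=1e-3, r_e=5e3)
print(f"i10 ≈ {i10 * 1e6:.1f} µA")   # about 21 µA, close to the ~20 µA quoted above
```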
A typical 741 op amp has a differential input impedance of about 2 MΩ. The common mode input impedance is even higher, as the input stage works at an essentially constant current. Differential amplifier A differential voltage Vin at the op amp inputs (pins 3 and 2, respectively) gives rise to a small differential current in the bases of Q1 and Q2, iin ≈ Vin / (2 hie hfe). This differential base current causes a change in the differential collector current in each leg by iin hfe. Introducing the transconductance of Q1, gm = hfe / hie, the (small-signal) current at the base of Q15 (the input of the voltage gain stage) is Vin gm / 2. This portion of the op amp cleverly changes a differential signal at the op amp inputs to a single-ended signal at the base of Q15, and in a way that avoids wastefully discarding the signal in either leg. To see how, notice that a small negative change in voltage at the inverting input (Q2 base) drives it out of conduction, and this incremental decrease in current passes directly from Q4 collector to its emitter, resulting in a decrease in base drive for Q15. On the other hand, a small positive change in voltage at the non-inverting input (Q1 base) drives this transistor into conduction, reflected in an increase in current at the collector of Q3. This current drives Q7 further into conduction, which turns on current mirror Q5/Q6. Thus, the increase in Q3 emitter current is mirrored in an increase in Q6 collector current; the increased collector current shunts more current from the collector node and results in a decrease in base drive current for Q15. Besides avoiding wasting 3 dB of gain here, this technique decreases common-mode gain and feedthrough of power supply noise. Voltage amplifier A current signal i at Q15's base gives rise to a current in Q19 of order iβ² (the product of the hfe of each of Q15 and Q19, which are connected in a Darlington pair). This current signal develops a voltage at the bases of output transistors Q14 and Q20 proportional to the hie of the respective transistor. Output amplifier Output transistors Q14 and Q20 are each configured as an emitter follower, so no voltage gain occurs there; instead, this stage provides current gain, equal to the hfe of Q14 and Q20. The current gain lowers the output impedance and although the output impedance is not zero, as it would be in an ideal op amp, with negative feedback it approaches zero at low frequencies. Other linear characteristics Overall open-loop gain The net open-loop small-signal voltage gain of the op amp is determined by the product of the current gain hfe of some 4 transistors. In practice, the voltage gain for a typical 741-style op amp is of order 200,000, and the current gain, the ratio of input impedance (~2–6 MΩ) to output impedance (~50 Ω), provides yet more (power) gain. Small-signal common mode gain The ideal op amp has infinite common-mode rejection ratio, or zero common-mode gain. In the present circuit, if the input voltages change in the same direction, the negative feedback makes the Q3/Q4 base voltage follow (with 2 VBE below) the input voltage variations. Now the output part (Q10) of the Q10-Q11 current mirror keeps the common current through Q9/Q8 constant in spite of varying voltage. The Q3/Q4 collector currents, and accordingly the output current at the base of Q15, remain unchanged. In the typical 741 op amp, the common-mode rejection ratio is 90 dB, implying an open-loop common-mode voltage gain of about 6. 
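That final figure follows directly from the definition of CMRR as the ratio of differential gain to common-mode gain, expressed in decibels; the short sketch below reproduces the arithmetic with the ~200,000 differential gain and 90 dB CMRR quoted above.

```python
# Sketch of the CMRR relationship used above:
#   A_cm = A_diff / 10**(CMRR_dB / 20)
# With a differential gain of ~200,000 and a CMRR of 90 dB this gives a
# common-mode gain of about 6, matching the text.

def common_mode_gain(a_diff: float, cmrr_db: float) -> float:
    return a_diff / (10 ** (cmrr_db / 20))

print(common_mode_gain(200_000, 90))   # ≈ 6.3
```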
Frequency compensation
The innovation of the Fairchild μA741 was the introduction of frequency compensation via an on-chip (monolithic) capacitor, simplifying application of the op amp by eliminating the need for external components for this function. The 30 pF capacitor stabilizes the amplifier via Miller compensation and functions in a manner similar to an op-amp integrator circuit. The scheme is also known as dominant pole compensation because it introduces a pole that masks (dominates) the effects of other poles in the open-loop frequency response; in a 741 op amp this pole can be as low as 10 Hz (where it causes a −3 dB loss of open-loop voltage gain). This internal compensation is provided to achieve unconditional stability of the amplifier in negative feedback configurations where the feedback network is non-reactive and the loop gain is unity or higher. In contrast, amplifiers without internal compensation, such as the μA748, may require external compensation or closed-loop gains significantly higher than unity.

Input offset voltage
The offset null pins may be used to place external resistors (typically in the form of the two ends of a potentiometer, with the slider connected to VS–) in parallel with the emitter resistors of Q5 and Q6, to adjust the balance of the Q5/Q6 current mirror. The potentiometer is adjusted such that the output is null (midrange) when the inputs are shorted together.

Non-linear characteristics

Input breakdown voltage
The transistors Q3 and Q4 help to increase the reverse VBE rating: the base-emitter junctions of the NPN transistors Q1 and Q2 break down at around 7 V, but the PNP transistors Q3 and Q4 have VBE breakdown voltages around 50 V.

Output-stage voltage swing and current limiting
Variations in the quiescent current with temperature, or due to manufacturing variations, are common, so crossover distortion may be subject to significant variation. The output range of the amplifier is about one volt less than the supply voltage, owing in part to the VBE of the output transistors Q14 and Q20. The resistor at the Q14 emitter, along with Q17, limits Q14 current to about ; otherwise, Q17 conducts no current. Current limiting for Q20 is performed in the voltage gain stage: Q22 senses the voltage across Q19's emitter resistor (); as it turns on, it diminishes the drive current to the Q15 base. Later versions of this amplifier schematic may show a somewhat different method of output current limiting.

Applicability considerations
While the 741 was historically used in audio and other sensitive equipment, such use is now rare because of the improved noise performance of more modern op amps. Apart from generating noticeable hiss, 741s and other older op amps may have poor common-mode rejection ratios and so will often introduce cable-borne mains hum and other common-mode interference, such as switch clicks, into sensitive equipment. The term "741" has often come to mean a generic op-amp IC (such as the μA741, LM301, 558, LM324, TBA221 — or a more modern replacement such as the TL071). The description of the 741 output stage is qualitatively similar for many other designs (that may have quite different input stages), except: Some devices (μA748, LM301, LM308) are not internally compensated (they require an external capacitor from the output to some point within the operational amplifier, if used in low closed-loop-gain applications).
Some modern devices have rail-to-rail output capability, meaning that the output can range from within a few millivolts of the positive supply voltage to within a few millivolts of the negative supply voltage.

Classification
Op amps may be classified by their construction:
discrete, built from individual transistors or tubes/valves;
hybrid, consisting of discrete and integrated components;
fully integrated circuits, the most common, having displaced the former two due to low cost.
IC op amps may be classified in many ways, including:
Device grade, including acceptable operating temperature ranges and other environmental or quality factors. For example, LM101, LM201, and LM301 refer to the military, industrial, and commercial versions of the same component. Military- and industrial-grade components offer better performance in harsh conditions than their commercial counterparts but are sold at higher prices.
Classification by package type may also affect environmental hardiness, as well as manufacturing options; DIP and other through-hole packages are tending to be replaced by surface-mount devices.
Classification by internal compensation: op amps may suffer from high-frequency instability in some negative-feedback circuits unless a small compensation capacitor modifies the phase and frequency responses. Op amps with a built-in capacitor are termed compensated, and allow circuits above some specified closed-loop gain to be stable with no external capacitor. In particular, op amps that are stable even with a closed-loop gain of 1 are called unity-gain compensated.
Single, dual and quad versions of many commercial op-amp ICs are available, meaning 1, 2 or 4 operational amplifiers are included in the same package.
Rail-to-rail input (and/or output) op amps can work with input (and/or output) signals very close to the power supply rails.
CMOS op amps (such as the CA3140E) provide extremely high input resistances, higher than JFET-input op amps, which are normally higher than bipolar-input op amps.
Programmable op amps allow the quiescent current, bandwidth and so on to be adjusted by an external resistor.
Manufacturers often market their op amps according to purpose, such as low-noise pre-amplifiers, wide-bandwidth amplifiers, and so on.

Applications

Use in electronics system design
The use of op amps as circuit blocks is much easier and clearer than specifying all their individual circuit elements (transistors, resistors, etc.), whether the amplifiers used are integrated or discrete circuits. To a first approximation, op amps can be used as if they were ideal differential gain blocks; at a later stage, limits can be placed on the acceptable range of parameters for each op amp. Circuit design follows the same lines for all electronic circuits. A specification is drawn up governing what the circuit is required to do, with allowable limits. For example, the gain may be required to be 100 times, with a tolerance of 5% but drift of less than 1% in a specified temperature range; the input impedance not less than one megohm; etc. A basic circuit is designed, often with the help of electronic circuit simulation. Specific commercially available op amps and other components are then chosen that meet the design criteria within the specified tolerances at acceptable cost. If not all criteria can be met, the specification may need to be modified. A prototype is then built and tested; additional changes to meet or improve the specification, alter functionality, or reduce the cost, may be made.
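As a toy illustration of checking a gain specification like the one above, the sketch below assumes a non-inverting amplifier (closed-loop gain 1 + R2/R1, as derived in the next section) built from hypothetical 1% resistors and verifies the worst-case gain against a 100× ± 5% requirement. The resistor values and tolerance are illustrative choices, not taken from the text.

```python
# Illustrative worst-case check of a "gain = 100x, tolerance 5%" specification
# for a non-inverting amplifier (gain = 1 + R2/R1).  Values are hypothetical.
from itertools import product

R1_nom, R2_nom = 1_000.0, 99_000.0      # nominal: gain = 1 + 99k/1k = 100
tol = 0.01                               # assumed 1% resistors

gains = [1 + (R2_nom * (1 + s2)) / (R1_nom * (1 + s1))
         for s1, s2 in product((-tol, +tol), repeat=2)]
print(f"gain range: {min(gains):.2f} .. {max(gains):.2f}")
# roughly 98 .. 102, i.e. comfortably within a +/-5% specification
```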
Applications without feedback
Without feedback, the op amp may be used as a voltage comparator. Note that a device designed primarily as a comparator may be a better choice if, for instance, speed is important or a wide range of input voltages may be encountered, since such devices can quickly recover from full-on or full-off (saturated) states.
A voltage level detector can be obtained if a reference voltage Vref is applied to one of the op amp's inputs. This means that the op amp is set up as a comparator to detect a positive voltage. If the voltage to be sensed, Ei, is applied to the op amp's (+) input, the result is a non-inverting positive-level detector: when Ei is above Vref, VO equals +Vsat; when Ei is below Vref, VO equals −Vsat. If Ei is applied to the inverting input, the circuit is an inverting positive-level detector: when Ei is above Vref, VO equals −Vsat.
A zero-voltage level detector (Ei = 0) can convert, for example, the sine-wave output of a function generator into a variable-frequency square wave. If Ei is a sine wave, triangular wave, or wave of any other shape that is symmetrical around zero, the zero-crossing detector's output will be square. Zero-crossing detection may also be useful in triggering TRIACs at the best time to reduce mains interference and current spikes.

Positive-feedback applications
Another typical configuration of op amps is with positive feedback, which takes a fraction of the output signal back to the non-inverting input. An important application of positive feedback is the comparator with hysteresis, the Schmitt trigger. Some circuits may use positive feedback and negative feedback around the same amplifier, for example triangle-wave oscillators and active filters.

Negative-feedback applications

Non-inverting amplifier
In a non-inverting amplifier, the output voltage changes in the same direction as the input voltage. The gain equation for the op amp is Vout = AOL (V+ − V−), where AOL is the open-loop gain. However, in this circuit V− is a function of Vout because of the negative feedback through the R1–R2 network. R1 and R2 form a voltage divider, and as V− is a high-impedance input, it does not load it appreciably. Consequently V− = β Vout, where β = R1 / (R1 + R2). Substituting this into the gain equation, we obtain Vout = AOL (Vin − β Vout). Solving for Vout gives Vout = Vin × 1 / (β + 1/AOL). If AOL is very large, this simplifies to Vout ≈ Vin / β = Vin (1 + R2/R1).
The non-inverting input of the operational amplifier needs a path for DC to ground; if the signal source does not supply a DC path, or if that source requires a given load impedance, then the circuit will require another resistor from the non-inverting input to ground. When the operational amplifier's input bias currents are significant, the DC source resistances driving the inputs should be balanced. The ideal value for the feedback resistors (to give minimal offset voltage) will be such that the two resistances in parallel roughly equal the resistance to ground at the non-inverting input pin. That ideal value assumes the bias currents are well matched, which may not be true for all op amps.

Inverting amplifier
In an inverting amplifier, the output voltage changes in an opposite direction to the input voltage. As with the non-inverting amplifier, we start with the gain equation of the op amp: Vout = AOL (V+ − V−). This time, V− is a function of both Vout and Vin due to the voltage divider formed by Rf and Rin.
Again, the op-amp input does not apply an appreciable load, so V− = (Rf Vin + Rin Vout) / (Rf + Rin). Substituting this into the gain equation and solving for Vout gives Vout = −Vin × AOL Rf / (Rf + Rin + AOL Rin). If AOL is very large, this simplifies to Vout ≈ −Vin × Rf / Rin.
A resistor is often inserted between the non-inverting input and ground (so both inputs see similar resistances), reducing the input offset voltage caused by the different voltage drops due to bias current; this may also reduce distortion in some op amps. A DC-blocking capacitor may be inserted in series with the input resistor when a frequency response down to DC is not needed and any DC voltage on the input is unwanted. That is, the capacitive component of the input impedance inserts a DC zero and a low-frequency pole that gives the circuit a bandpass or high-pass characteristic. The potentials at the operational amplifier inputs remain virtually constant (near ground) in the inverting configuration. The constant operating potential typically results in distortion levels that are lower than those attainable with the non-inverting topology.

Other applications
audio- and video-frequency pre-amplifiers and buffers
differential amplifiers
differentiators and integrators
filters
precision rectifiers
precision peak detectors
voltage and current regulators
analog calculators
analog-to-digital converters
digital-to-analog converters
voltage clamping
oscillators and waveform generators
clippers
clampers (DC inserters or restorers)
log and antilog amplifiers
Most single, dual and quad op amps available have a standardized pin-out which permits one type to be substituted for another without wiring changes. A specific op amp may be chosen for its open-loop gain, bandwidth, noise performance, input impedance, power consumption, or a compromise between any of these factors.

Historical timeline
1941: A vacuum tube op amp. An op amp, defined as a general-purpose, DC-coupled, high-gain, inverting feedback amplifier, is first found in "Summing Amplifier" filed by Karl D. Swartzel Jr. of Bell Labs in 1941. This design used three vacuum tubes to achieve a gain of and operated on voltage rails of . It had a single inverting input rather than differential inverting and non-inverting inputs, as are common in today's op amps. Throughout World War II, Swartzel's design proved its value by being liberally used in the M9 artillery director designed at Bell Labs. This artillery director worked with the SCR-584 radar system to achieve extraordinary hit rates (near 90%) that would not have been possible otherwise.
1947: An op amp with an explicit non-inverting input. In 1947, the operational amplifier was first formally defined and named in a paper by John R. Ragazzini of Columbia University. In this same paper a footnote mentioned an op-amp design by a student that would turn out to be quite significant. This op amp, designed by Loebe Julie, was superior in a variety of ways. It had two major innovations. Its input stage used a long-tailed triode pair with loads matched to reduce drift in the output and, far more importantly, it was the first op-amp design to have two inputs (one inverting, the other non-inverting). The differential input made a whole range of new functionality possible, but it would not be used for a long time due to the rise of the chopper-stabilized amplifier.
1949: A chopper-stabilized op amp. In 1949, Edwin A. Goldberg designed a chopper-stabilized op amp. This set-up uses a normal op amp with an additional AC amplifier that goes alongside the op amp.
The chopper gets an AC signal from DC by switching between the DC voltage and ground at a fast rate (60 Hz or 400 Hz). This signal is then amplified, rectified, filtered and fed into the op amp's non-inverting input. This vastly improved the gain of the op amp while significantly reducing the output drift and DC offset. Unfortunately, any design that used a chopper couldn't use its non-inverting input for any other purpose. Nevertheless, the much improved characteristics of the chopper-stabilized op amp made it the dominant way to use op amps. Techniques that used the non-inverting input regularly would not be very popular until the 1960s, when op-amp ICs started to show up in the field.
1953: A commercially available op amp. In 1953, vacuum tube op amps became commercially available with the release of the model K2-W from George A. Philbrick Researches, Incorporated. The designation on the devices shown, GAP/R, is an acronym for the complete company name. Two nine-pin 12AX7 vacuum tubes were mounted in an octal package and had a model K2-P chopper add-on available that would effectively "use up" the non-inverting input. This op amp was based on a descendant of Loebe Julie's 1947 design and, along with its successors, would start the widespread use of op amps in industry.
1961: A discrete IC op amp. With the birth of the transistor in 1947, and the silicon transistor in 1954, the concept of ICs became a reality. The introduction of the planar process in 1959 made transistors and ICs stable enough to be commercially useful. By 1961, solid-state, discrete op amps were being produced. These op amps were effectively small circuit boards with packages such as edge connectors. They usually had hand-selected resistors in order to improve things such as voltage offset and drift. The P45 (1961) had a gain of 94 dB and ran on ±15 V rails. It was intended to deal with signals in the range of .
1961: A varactor bridge op amp. There have been many different directions taken in op-amp design. Varactor bridge op amps started to be produced in the early 1960s. They were designed to have extremely small input current and are still amongst the best op amps available in terms of common-mode rejection, with the ability to correctly deal with hundreds of volts at their inputs.
1962: An op amp in a potted module. By 1962, several companies were producing modular potted packages that could be plugged into printed circuit boards. These packages were crucially important as they made the operational amplifier into a single black box which could be easily treated as a component in a larger circuit.
1963: A monolithic IC op amp. In 1963, the first monolithic IC op amp, the μA702 designed by Bob Widlar at Fairchild Semiconductor, was released. Monolithic ICs consist of a single chip, as opposed to a chip and discrete parts (a discrete IC) or multiple chips bonded and connected on a circuit board (a hybrid IC). Almost all modern op amps are monolithic ICs; however, this first IC did not meet with much success. Issues such as an uneven supply voltage, low gain and a small dynamic range held off the dominance of monolithic op amps until 1965, when the μA709 (also designed by Bob Widlar) was released.
1968: Release of the μA741. The popularity of monolithic op amps was further improved upon the release of the LM101 in 1967, which solved a variety of issues, and the subsequent release of the μA741 in 1968.
The μA741 was extremely similar to the LM101, except that Fairchild's facilities allowed them to include a 30 pF compensation capacitor inside the chip instead of requiring external compensation. This simple difference has made the 741 the canonical op amp, and many modern amps base their pinout on the 741's. The μA741 is still in production, and has become ubiquitous in electronics—many manufacturers produce a version of this classic chip, recognizable by part numbers containing 741. The same part is manufactured by several companies.
1970: First high-speed, low-input-current FET design. In the 1970s, high-speed, low-input-current designs started to be made by using FETs. These would be largely replaced by op amps made with MOSFETs in the 1980s.
1972: Single-sided-supply op amps produced. A single-sided-supply op amp is one where the input and output voltages can be as low as the negative power supply voltage instead of needing to be at least two volts above it. The result is that it can operate in many applications with the negative supply pin on the op amp connected to the signal ground, thus eliminating the need for a separate negative power supply. The LM324 (released in 1972) was one such op amp that came in a quad package (four separate op amps in one package) and became an industry standard. In addition to packaging multiple op amps in a single package, the 1970s also saw the birth of op amps in hybrid packages. These op amps were generally improved versions of existing monolithic op amps. As the properties of monolithic op amps improved, the more complex hybrid ICs were quickly relegated to systems required to have extremely long service lives, or to other specialty systems.
Recent trends. Supply voltages in analog circuits have decreased (as they have in digital logic), and low-voltage op amps have been introduced to reflect this. Supplies of 5 V and increasingly 3.3 V (sometimes as low as 1.8 V) are common. To maximize the signal range, modern op amps commonly have rail-to-rail output (the output signal can range from the lowest supply voltage to the highest) and sometimes rail-to-rail inputs.
Olympus Mons
Olympus Mons is a large shield volcano on Mars. It is over high as measured by the Mars Orbiter Laser Altimeter (MOLA), about 2.5 times the elevation of Mount Everest above sea level. It is Mars's tallest volcano, its tallest planetary mountain, and is approximately tied with Rheasilvia on Vesta as the tallest mountain currently discovered in the Solar System. It is associated with the volcanic region of Tharsis Montes. It last erupted 25 million years ago. Olympus Mons is the youngest of the large volcanoes on Mars, having formed during the Martian Hesperian Period with eruptions continuing well into the Amazonian Period. It has been known to astronomers since the late 19th century as the albedo feature Nix Olympica (Latin for "Olympic Snow"), and its mountainous nature was suspected well before space probes confirmed it as a mountain.
Two impact craters on Olympus Mons have been assigned provisional names by the International Astronomical Union: the Karzok crater and the Pangboche crater. They are two of several suspected source areas for shergottites, the most abundant class of Martian meteorites.

Description
As a shield volcano, Olympus Mons resembles the shape of the large volcanoes making up the Hawaiian Islands. The edifice is about wide. Because the mountain is so large, with complex structure at its edges, allocating a height to it is difficult. Olympus Mons stands above the Mars global datum, and its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over (a little over twice the height of Mauna Kea as measured from its base on the ocean floor). The total elevation change from the plains of Amazonis Planitia, over to the northwest, to the summit approaches . The summit of the mountain has six nested calderas (collapsed craters) forming an irregular depression × across and up to deep. The volcano's outer edge consists of an escarpment, or cliff, up to tall (although obscured by lava flows in places), a feature unique among the shield volcanoes of Mars, which may have been created by enormous flank landslides. Olympus Mons covers an area of about , which is approximately the size of Italy or the Philippines, and it is supported by a thick lithosphere. The extraordinary size of Olympus Mons is likely because Mars lacks mobile tectonic plates. Unlike on Earth, the crust of Mars remains fixed over a stationary hotspot, and a volcano can continue to discharge lava until it reaches an enormous height.
Being a shield volcano, Olympus Mons has a very gently sloping profile. The average slope on the volcano's flanks is only 5%. Slopes are steepest near the middle part of the flanks and grow shallower toward the base, giving the flanks a concave-upward profile. Its flanks are shallower and extend farther from the summit in the northwestern direction than they do to the southeast. The volcano's shape and profile have been likened to a "circus tent" held up by a single pole that is shifted off center. Due to the size and shallow slopes of Olympus Mons, an observer standing on the Martian surface would be unable to view the entire profile of the volcano, even from a great distance. The curvature of the planet and the volcano itself would obscure such a synoptic view. Similarly, an observer near the summit would be unaware of standing on a very high mountain, as the slope of the volcano would extend far beyond the horizon, a mere 3 kilometers away.
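The "horizon a mere 3 kilometers away" figure can be reproduced with the standard horizon-distance formula. The sketch below assumes a smooth sphere of Mars's mean radius and an observer eye height of about 1.7 m; the eye height is an illustrative assumption, not a value from the text.

```python
# Rough check of the ~3 km horizon distance quoted above.
import math

R_mars = 3_389_500.0   # mean radius of Mars, metres
h_eye = 1.7            # assumed observer eye height, metres

d = math.sqrt(2 * R_mars * h_eye + h_eye ** 2)
print(f"distance to horizon ~ {d / 1000:.1f} km")   # ~3.4 km, consistent with the text
```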
The typical atmospheric pressure at the top of Olympus Mons is 72 pascals, about 12% of the average Martian surface pressure of 600 pascals. Both are exceedingly low by terrestrial standards; by comparison, the atmospheric pressure at the summit of Mount Everest is 32,000 pascals, or about 32% of Earth's sea-level pressure. Even so, high-altitude orographic clouds frequently drift over the Olympus Mons summit, and airborne Martian dust is still present. Although the average Martian surface atmospheric pressure is less than one percent of Earth's, the much lower gravity of Mars increases the atmosphere's scale height; in other words, Mars's atmosphere is expansive and does not drop off in density with height as sharply as Earth's.
The composition of Olympus Mons is approximately 44% silicates, 17.5% iron oxides (which give the planet its red coloration), 7% aluminium, 6% magnesium, 6% calcium, and a particularly high proportion of sulfur dioxide, at 7%. These results point to the surface being largely composed of basalts and other mafic rocks, which would have erupted as low-viscosity lava flows and hence led to the low gradients on the surface of the planet.

Geology
Olympus Mons is the result of many thousands of highly fluid, basaltic lava flows that poured from volcanic vents over a long period of time (the Hawaiian Islands exemplify similar shield volcanoes on a smaller scale – see Mauna Kea). Like the basalt volcanoes on Earth, Martian basaltic volcanoes are capable of erupting enormous quantities of ash. Due to the reduced gravity of Mars compared to Earth, there are weaker buoyant forces on the magma rising out of the crust. In addition, the magma chambers are thought to be much larger and deeper than the ones found on Earth.
The flanks of Olympus Mons are made up of innumerable lava flows and channels. Many of the flows have levees along their margins. The cooler, outer margins of the flow solidify, leaving a central trough of molten, flowing lava. Partially collapsed lava tubes are visible as chains of pit craters, and broad lava fans formed by lava emerging from intact, subsurface tubes are also common. In places along the volcano's base, solidified lava flows can be seen spilling out into the surrounding plains, forming broad aprons and burying the basal escarpment. Crater counts from high-resolution images taken by the Mars Express orbiter in 2004 indicate that lava flows on the northwestern flank of Olympus Mons range in age from about 115 million years to only 2 million years. These ages are very recent in geological terms, suggesting that the mountain may still be volcanically active, though in a very quiescent and episodic fashion.
The caldera complex at the peak of the volcano is made of at least six overlapping calderas and caldera segments. Calderas are formed by roof collapse following depletion and withdrawal of the subsurface magma chamber after an eruption. Each caldera thus represents a separate pulse of volcanic activity on the mountain. The largest and oldest caldera segment appears to have formed as a single, large lava lake. Using geometric relationships of caldera dimensions from laboratory models, scientists have estimated that the magma chamber associated with the largest caldera on Olympus Mons lies at a depth of about below the caldera floor. Crater size-frequency distributions on the caldera floors indicate the calderas range in age from 350 Mya to about 150 Mya. All probably formed within 100 million years of each other.
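As a rough cross-check of the pressure percentages and the scale-height comparison given at the start of this section, the sketch below uses the pressures quoted in the text together with assumed textbook-style temperatures and gas properties (the latter are not from the text).

```python
# Pressure ratios quoted in the text, plus an isothermal scale-height comparison.
p_summit, p_mars_avg = 72.0, 600.0            # Pa, from the text
p_everest, p_earth_sl = 32_000.0, 101_325.0   # Pa (Earth sea-level value assumed)
print(f"Olympus Mons summit: {100 * p_summit / p_mars_avg:.0f}% of mean Mars pressure")
print(f"Everest summit:      {100 * p_everest / p_earth_sl:.0f}% of Earth sea-level pressure")

# Isothermal scale height H = R_specific * T / g (all right-hand values assumed)
R_co2, T_mars, g_mars = 188.9, 210.0, 3.71     # J/(kg K), K, m/s^2
R_air, T_earth, g_earth = 287.0, 288.0, 9.81   # J/(kg K), K, m/s^2
print(f"Mars scale height  ~ {R_co2 * T_mars / g_mars / 1000:.1f} km")    # ~10.7 km
print(f"Earth scale height ~ {R_air * T_earth / g_earth / 1000:.1f} km")  # ~8.4 km
```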
It is possible that the magma chambers within Olympus Mons received new magma from the mantle after the caldera floors formed, leading to the inflation of each chamber and uplift of parts of the volcano summit.
Olympus Mons is structurally and topographically asymmetrical. The longer, more shallow northwestern flank displays extensional features, such as large slumps and normal faults. In contrast, the volcano's steeper southeastern side has features indicating compression, including step-like terraces in the volcano's mid-flank region (interpreted as thrust faults) and a number of wrinkle ridges located at the basal escarpment. The reason opposite sides of the mountain show different styles of deformation may lie in how large shield volcanoes grow laterally and in how variations within the volcanic substrate have affected the mountain's final shape.
Large shield volcanoes grow not only by adding material to their flanks as erupted lava, but also by spreading laterally at their bases. As a volcano grows in size, the stress field underneath the volcano changes from compressional to extensional. A subterranean rift may develop at the base of the volcano, causing the underlying crust to spread apart. If the volcano rests on sediments containing mechanically weak layers (e.g., beds of water-saturated clay), detachment zones (décollements) may develop in the weak layers. The extensional stresses in the detachment zones can produce giant landslides and normal faults on the volcano's flanks, leading to the formation of a basal escarpment. Farther from the volcano, these detachment zones can express themselves as a succession of overlapping, gravity-driven thrust faults. This mechanism has long been cited as an explanation of the Olympus Mons aureole deposits (discussed below).
Olympus Mons lies at the edge of the Tharsis bulge, a vast, ancient volcanic plateau likely formed by the end of the Noachian Period. During the Hesperian, when Olympus Mons began to form, the volcano was located on a shallow slope that descended from the high in Tharsis into the northern lowland basins. Over time, these basins received large volumes of sediment eroded from Tharsis and the southern highlands. The sediments likely contained abundant Noachian-aged phyllosilicates (clays) formed during an early period on Mars when surface water was abundant, and were thickest in the northwest where basin depth was greatest. As the volcano grew through lateral spreading, low-friction detachment zones preferentially developed in the thicker sediment layers to the northwest, creating the basal escarpment and widespread lobes of aureole material (Lycus Sulci). Spreading also occurred to the southeast; however, it was more constrained in that direction by the Tharsis rise, which presented a higher-friction zone at the volcano's base. Friction was higher in that direction because the sediments were thinner and probably consisted of coarser-grained material resistant to sliding. The competent and rugged basement rocks of Tharsis acted as an additional source of friction. This inhibition of southeasterly basal spreading in Olympus Mons could account for the structural and topographic asymmetry of the mountain. Numerical models of particle dynamics involving lateral differences in friction along the base of Olympus Mons have been shown to reproduce the volcano's present shape and asymmetry fairly well.
It has been speculated that the detachment along the weak layers was aided by the presence of high-pressure water in the sediment pore spaces, which would have interesting astrobiological implications. If water-saturated zones still exist in sediments under the volcano, they would likely have been kept warm by a high geothermal gradient and residual heat from the volcano's magma chamber. Potential springs or seeps around the volcano would offer many possibilities for detecting microbial life.

Early observations and naming
Olympus Mons and a few other volcanoes in the Tharsis region stand high enough to reach above the frequent Martian dust storms recorded by telescopic observers as early as the 19th century. The astronomer Patrick Moore pointed out that Schiaparelli (1835–1910) "had found that his Nodus Gordis and Olympic Snow [Nix Olympica] were almost the only features to be seen" during dust storms, and "guessed correctly that they must be high".
The Mariner 9 spacecraft arrived in orbit around Mars in 1971 during a global dust storm. The first objects to become visible as the dust began to settle, the tops of the Tharsis volcanoes, demonstrated that the altitude of these features greatly exceeded that of any mountain found on Earth, as astronomers expected. Observations of the planet from Mariner 9 confirmed that Nix Olympica was a volcano. Ultimately, astronomers adopted the name Olympus Mons for the albedo feature known as Nix Olympica.

Regional setting and surrounding features
Olympus Mons is located between the northwestern edge of the Tharsis region and the eastern edge of Amazonis Planitia. It stands about from the other three large Martian shield volcanoes, collectively called the Tharsis Montes (Arsia Mons, Pavonis Mons, and Ascraeus Mons). The Tharsis Montes are slightly smaller than Olympus Mons. A wide, annular depression or moat about deep surrounds the base of Olympus Mons and is thought to be due to the volcano's immense weight pressing down on the Martian crust. The depth of this depression is greater on the northwest side of the mountain than on the southeast side.
Olympus Mons is partially surrounded by a region of distinctive grooved or corrugated terrain known as the Olympus Mons aureole. The aureole consists of several large lobes. Northwest of the volcano, the aureole extends a distance of up to and is known as Lycus Sulci. East of Olympus Mons, the aureole is partially covered by lava flows, but where it is exposed it goes by different names (Gigas Sulci, for example). The origin of the aureole remains debated, but it was likely formed by huge landslides or gravity-driven thrust sheets that sloughed off the edges of the Olympus Mons shield.