Chalcogen
https://en.wikipedia.org/wiki/Chalcogen
[Infobox: periodic table excerpt showing the chalcogens (group 16) across periods 2–7, with legend; table omitted]
The chalcogens (ore-forming elements) are the chemical elements in group 16 of the periodic table. This group is also known as the oxygen family. Group 16 consists of the elements oxygen (O), sulfur (S), selenium (Se), tellurium (Te), and the radioactive elements polonium (Po) and livermorium (Lv). Oxygen is often treated separately from the other chalcogens, and is sometimes excluded from the scope of the term "chalcogen" altogether, because its chemical behavior differs markedly from that of sulfur, selenium, tellurium, and polonium. The word "chalcogen" is derived from a combination of the Greek word khalkós (χαλκός), principally meaning copper (the term was also used for bronze, brass, any metal in the poetic sense, ore, and coin), and the Latinized Greek word genēs, meaning born or produced.
Sulfur has been known since antiquity, and oxygen was recognized as an element in the 18th century. Selenium, tellurium and polonium were discovered in the 19th century, and livermorium in 2000. All of the chalcogens have six valence electrons, leaving them two electrons short of a full outer shell. Their most common oxidation states are −2, +2, +4, and +6. They have relatively small atomic radii, especially the lighter ones.
All of the naturally occurring chalcogens have some role in biological functions, either as a nutrient or a toxin. Selenium is an important nutrient (among others as a building block of selenocysteine) but is also commonly toxic. Tellurium often has unpleasant effects (although some organisms can use it), and polonium (especially the isotope polonium-210) is always harmful as a result of its radioactivity.
Sulfur has more than 20 allotropes, oxygen has nine, selenium has at least eight, polonium has two, and only one crystal structure of tellurium has so far been discovered. There are numerous organic chalcogen compounds. Not counting oxygen, organic sulfur compounds are generally the most common, followed by organic selenium compounds and organic tellurium compounds. This trend also occurs with chalcogen pnictides and compounds containing chalcogens and carbon group elements.
Oxygen is generally obtained by separation of air into nitrogen and oxygen. Sulfur is extracted from oil and natural gas. Selenium and tellurium are produced as byproducts of copper refining. Polonium is most available in naturally occurring actinide-containing materials. Livermorium has been synthesized in particle accelerators. The primary use of elemental oxygen is in steelmaking. Sulfur is mostly converted into sulfuric acid, which is heavily used in the chemical industry. Selenium's most common application is glassmaking. Tellurium compounds are mostly used in optical disks, electronic devices, and solar cells. Some of polonium's applications are due to its radioactivity.
Properties
Atomic and physical
Chalcogens show similar patterns in electron configuration, especially in the outermost shells, where they all have the same number of valence electrons, resulting in similar trends in chemical behavior:
All chalcogens have six valence electrons. All of the solid, stable chalcogens are soft and do not conduct heat well. Electronegativity decreases towards the chalcogens with higher atomic numbers. Density, melting and boiling points, and atomic and ionic radii tend to increase towards the chalcogens with higher atomic numbers.
Isotopes
Out of the six known chalcogens, one (oxygen) has an atomic number equal to a nuclear magic number, which means that its atomic nuclei tend to have increased stability towards radioactive decay. Oxygen has three stable isotopes and 14 unstable ones. Sulfur has four stable isotopes, 20 radioactive ones, and one isomer. Selenium has six observationally stable or nearly stable isotopes, 26 radioactive isotopes, and 9 isomers. Tellurium has eight stable or nearly stable isotopes, 31 unstable ones, and 17 isomers. Polonium has 42 isotopes, none of which are stable, and an additional 28 isomers. In addition to the stable isotopes, some radioactive chalcogen isotopes occur in nature, either because they are decay products (such as 210Po), because they are primordial (such as 82Se), because they are produced by cosmic ray spallation, or because they arise from nuclear fission of uranium. Livermorium isotopes 288Lv through 293Lv have been discovered; the most stable is 293Lv, with a half-life of 0.061 seconds.
With the exception of livermorium, all chalcogens have at least one naturally occurring radioisotope: oxygen has trace 15O, sulfur has trace 35S, selenium has 82Se, tellurium has 128Te and 130Te, and polonium has 210Po.
Among the lighter chalcogens (oxygen and sulfur), the most neutron-poor isotopes undergo proton emission, the moderately neutron-poor isotopes undergo electron capture or β+ decay, the moderately neutron-rich isotopes undergo β− decay, and the most neutron-rich isotopes undergo neutron emission. The middle chalcogens (selenium and tellurium) have similar decay tendencies to the lighter chalcogens, but no proton-emitting isotopes have been observed, and some of the most neutron-deficient isotopes of tellurium undergo alpha decay. Polonium isotopes tend to decay via alpha or beta decay. Isotopes with nonzero nuclear spins are more abundant in nature among the chalcogens selenium and tellurium than they are with sulfur.
Allotropes
Oxygen's most common allotrope is diatomic oxygen, or O2, a reactive paramagnetic molecule that is ubiquitous to aerobic organisms and has a blue color in its liquid state. Another allotrope is O3, or ozone, which is three oxygen atoms bonded together in a bent formation. There is also an allotrope called tetraoxygen, or O4, and six allotropes of solid oxygen including "red oxygen", which has the formula O8.
Sulfur has over 20 known allotropes, which is more than any other element except carbon. The most common allotropes are in the form of eight-atom rings, but other molecular allotropes that contain as few as two atoms or as many as 20 are known. Other notable sulfur allotropes include rhombic sulfur and monoclinic sulfur. Rhombic sulfur is the more stable of the two allotropes. Monoclinic sulfur takes the form of long needles and is formed when liquid sulfur is cooled to slightly below its melting point. The atoms in liquid sulfur are generally in the form of long chains, but above 190 °C, the chains begin to break down. If liquid sulfur above 190 °C is frozen very rapidly, the resulting sulfur is amorphous or "plastic" sulfur. Gaseous sulfur is a mixture of diatomic sulfur (S2) and 8-atom rings.
Selenium has at least eight distinct allotropes. The gray allotrope, commonly referred to as the "metallic" allotrope despite not being a metal, is stable and has a hexagonal crystal structure. It is soft, with a Mohs hardness of 2, and brittle. Four other allotropes of selenium are metastable: two monoclinic red allotropes and two amorphous allotropes, one red and one black. The red allotrope converts to the black allotrope in the presence of heat. The gray allotrope is made of spirals of selenium atoms, while one of the red allotropes is made of stacks of selenium rings (Se8).
Tellurium is not known to have any allotropes, although its typical form is hexagonal. Polonium has two allotropes, which are known as α-polonium and β-polonium. α-polonium has a cubic crystal structure and converts to the rhombohedral β-polonium at 36 °C.
The chalcogens have varying crystal structures. Oxygen's crystal structure is monoclinic, sulfur's is orthorhombic, selenium and tellurium have the hexagonal crystal structure, while polonium has a cubic crystal structure.
Chemical
Oxygen, sulfur, and selenium are nonmetals, and tellurium is a metalloid, meaning that its chemical properties are between those of a metal and those of a nonmetal. It is not certain whether polonium is a metal or a metalloid. Some sources refer to polonium as a metalloid, although it has some metallic properties. Also, some allotropes of selenium display characteristics of a metalloid, even though selenium is usually considered a nonmetal. Even though oxygen is a chalcogen, its chemical properties are different from those of other chalcogens. One reason for this is that the heavier chalcogens have vacant d-orbitals. Oxygen's electronegativity is also much higher than those of the other chalcogens. This makes oxygen's electric polarizability several times lower than those of the other chalcogens.
In covalent bonding, a chalcogen may accept two electrons according to the octet rule, leaving two lone pairs. When an atom forms two single bonds, they form an angle between 90° and 120°. In 1+ cations, such as the hydronium ion (H3O+), a chalcogen forms three molecular orbitals arranged in a trigonal pyramidal fashion and one lone pair. Double bonds are also common in chalcogen compounds, for example in chalcogenates (see below).
In their most common compounds with electropositive metals, chalcogens have an oxidation number of −2. However, the tendency of chalcogens to form compounds in the −2 state decreases towards the heavier chalcogens. Other oxidation numbers, such as −1 in pyrite and peroxide, do occur. The highest formal oxidation number is +6, found in sulfates, selenates, tellurates, polonates, and their corresponding acids, such as sulfuric acid.
Oxygen is the most electronegative element except for fluorine, and forms compounds with almost all of the chemical elements, including some of the noble gases. It commonly bonds with many metals and metalloids to form oxides, including iron oxide, titanium oxide, and silicon oxide. Oxygen's most common oxidation state is −2, and the oxidation state −1 is also relatively common. With hydrogen it forms water and hydrogen peroxide. Organic oxygen compounds are ubiquitous in organic chemistry.
Sulfur's oxidation states are −2, +2, +4, and +6. Sulfur-containing analogs of oxygen compounds often have the prefix thio-. Sulfur's chemistry is similar to oxygen's, in many ways. One difference is that sulfur-sulfur double bonds are far weaker than oxygen-oxygen double bonds, but sulfur-sulfur single bonds are stronger than oxygen-oxygen single bonds. Organic sulfur compounds such as thiols have a strong specific smell, and a few are utilized by some organisms.
Selenium's oxidation states are −2, +4, and +6. Selenium, like most chalcogens, bonds with oxygen. There are some organic selenium compounds, such as selenoproteins. Tellurium's oxidation states are −2, +2, +4, and +6. Tellurium forms the oxides tellurium monoxide, tellurium dioxide, and tellurium trioxide. Polonium's oxidation states are +2 and +4.
There are many acids containing chalcogens, including sulfuric acid, sulfurous acid, selenic acid, and telluric acid. All hydrogen chalcogenides are toxic except for water. Oxygen ions often come in the forms of oxide ions (O2−), peroxide ions (O22−), and hydroxide ions (OH−). Sulfur ions generally come in the form of sulfides (S2−), bisulfides (HS−), sulfites (SO32−), sulfates (SO42−), and thiosulfates (S2O32−). Selenium ions usually come in the form of selenides (Se2−), selenites (SeO32−), and selenates (SeO42−). Tellurium ions often come in the form of tellurates (TeO42−). Molecules containing metal bonded to chalcogens are common as minerals. For example, pyrite (FeS2) is an iron ore, and the rare mineral calaverite is the ditelluride AuTe2.
Although all group 16 elements of the periodic table, including oxygen, can be defined as chalcogens, oxygen and oxides are usually distinguished from chalcogens and chalcogenides. The term chalcogenide is more commonly reserved for sulfides, selenides, and tellurides, rather than for oxides.
Except for polonium, the chalcogens are all fairly similar to each other chemically. They all form X2− ions when reacting with electropositive metals.
Sulfide minerals and analogous compounds produce gases upon reaction with oxygen.
Compounds
With halogens
Chalcogens also form compounds with halogens known as chalcohalides, or chalcogen halides. The majority of simple chalcogen halides are well-known and widely used as chemical reagents. However, more complicated chalcogen halides, such as sulfenyl, sulfonyl, and sulfuryl halides, are less well known to science. Out of the compounds consisting purely of chalcogens and halogens, a total of 13 chalcogen fluorides, nine chalcogen chlorides, eight chalcogen bromides, and six chalcogen iodides are known. The heavier chalcogen halides often have significant molecular interactions.

Sulfur fluorides with low valences are fairly unstable and little is known about their properties. However, sulfur fluorides with high valences, such as sulfur hexafluoride, are stable and well-known. Sulfur tetrafluoride is also a well-known sulfur fluoride. Certain selenium fluorides, such as selenium difluoride, have been produced in small amounts. The crystal structures of both selenium tetrafluoride and tellurium tetrafluoride are known. Chalcogen chlorides and bromides have also been explored. In particular, selenium dichloride and sulfur dichloride can react to form organic selenium compounds. Dichalcogen dihalides, such as Se2Cl2, are also known to exist. There are also mixed chalcogen-halogen compounds, such as SeSX, where X is chlorine or bromine. Such compounds can form in mixtures of sulfur dichloride and selenium halides, and were only structurally characterized relatively recently, as of 2008. In general, diselenium and disulfur chlorides and bromides are useful chemical reagents. Chalcogen halides with attached metal atoms are soluble in organic solutions.

Unlike selenium chlorides and bromides, selenium iodides had not been isolated as of 2008, although it is likely that they occur in solution. Diselenium diiodide, however, does occur in equilibrium with selenium atoms and iodine molecules. Some tellurium halides with low valences form polymers in the solid state. These tellurium halides can be synthesized by reducing pure tellurium with superhydride and reacting the resulting product with tellurium tetrahalides. Ditellurium dihalides tend to become less stable as the halides become lower in atomic number and atomic mass. Tellurium also forms iodides with even fewer iodine atoms than diiodides, including TeI and Te2I; these compounds have extended structures in the solid state. Halogens and chalcogens can also form halochalcogenate anions.
Organic
Alcohols, phenols and other similar compounds contain oxygen. In thiols, selenols and tellurols, however, sulfur, selenium, and tellurium respectively take the place of oxygen. Thiols are better known than selenols or tellurols. Aside from alcohols, thiols are the most stable chalcogenols; tellurols are the least stable, being unstable in heat or light. Other organic chalcogen compounds include thioethers, selenoethers and telluroethers. Some of these, such as dimethyl sulfide, diethyl sulfide, and dipropyl sulfide, are commercially available. Selenoethers are in the form of R2Se or RSeR. Telluroethers such as dimethyl telluride are typically prepared in the same way as thioethers and selenoethers. Organic chalcogen compounds, especially organic sulfur compounds, tend to smell unpleasant. Dimethyl telluride also smells unpleasant, and selenophenol is renowned for its "metaphysical stench". There are also thioketones, selenoketones, and telluroketones. Of these, thioketones are the best-studied, accounting for 80% of papers on chalcogenoketones; selenoketones make up 16% of such papers and telluroketones 4%. Thioketones have well-studied non-linear electric and photophysical properties. Selenoketones are less stable than thioketones, and telluroketones are less stable than selenoketones. Telluroketones have the highest level of polarity of the chalcogenoketones.
With metals
There is a very large number of metal chalcogenides. There are also ternary compounds containing alkali metals and transition metals. Highly metal-rich metal chalcogenides, such as Lu7Te and Lu8Te have domains of the metal's crystal lattice containing chalcogen atoms. While these compounds do exist, analogous chemicals that contain lanthanum, praseodymium, gadolinium, holmium, terbium, or ytterbium have not been discovered, as of 2008. The boron group metals aluminum, gallium, and indium also form bonds to chalcogens. The Ti3+ ion forms chalcogenide dimers such as TiTl5Se8. Metal chalcogenide dimers also occur as lower tellurides, such as Zr5Te6.
Elemental chalcogens react with certain lanthanide compounds to form lanthanide clusters rich in chalcogens. Uranium(IV) chalcogenol compounds also exist. There are also transition metal chalcogenols which have potential to serve as catalysts and stabilize nanoparticles.
With pnictogens
Compounds with chalcogen-phosphorus bonds have been explored for more than 200 years. These compounds include unsophisticated phosphorus chalcogenides as well as large molecules with biological roles and phosphorus-chalcogen compounds with metal clusters. These compounds have numerous applications, including organophosphate insecticides, strike-anywhere matches and quantum dots. A total of 130,000 compounds with at least one phosphorus-sulfur bond, 6,000 compounds with at least one phosphorus-selenium bond, and 350 compounds with at least one phosphorus-tellurium bond have been discovered. The decrease in the number of chalcogen-phosphorus compounds further down the periodic table is due to diminishing bond strength. Such compounds tend to have at least one phosphorus atom in the center, surrounded by four chalcogens and side chains. However, some phosphorus-chalcogen compounds also contain hydrogen (such as secondary phosphine chalcogenides) or nitrogen (such as dichalcogenoimidodiphosphates). Phosphorus selenides are typically harder to handle than phosphorus sulfides, and compounds of the form PxTey have not been discovered.

Chalcogens also bond with other pnictogens, such as arsenic, antimony, and bismuth. Heavier chalcogen pnictides tend to form ribbon-like polymers instead of individual molecules; chemical formulas of these compounds include Bi2S3 and Sb2Se3. Ternary chalcogen pnictides are also known, such as P4O6Se and P3SbS3. Salts containing chalcogens and pnictogens also exist; almost all of these are in the form [PnxE4x]3−, where Pn is a pnictogen and E is a chalcogen. Tertiary phosphines can react with chalcogens to form compounds of the form R3PE, where E is a chalcogen. When E is sulfur, these compounds are relatively stable, but they are less so when E is selenium or tellurium. Similarly, secondary phosphines can react with chalcogens to form secondary phosphine chalcogenides; these compounds are in a state of equilibrium with chalcogenophosphinous acid. Secondary phosphine chalcogenides are weak acids. Binary compounds consisting of antimony or arsenic and a chalcogen also exist; they tend to be colorful and can be created by a reaction of the constituent elements at elevated temperatures.
Other
Chalcogens form single and double bonds with carbon group elements other than carbon, such as silicon, germanium, and tin. Such compounds typically form from a reaction of carbon group halides and chalcogenol salts or chalcogenol bases. Cyclic compounds containing chalcogens, carbon group elements, and boron atoms exist, and form from the reaction of boron dichalcogenates and carbon group metal halides. Compounds of the form M-E, where M is silicon, germanium, or tin, and E is sulfur, selenium or tellurium, have been discovered. These form when carbon group hydrides react or when heavier versions of carbenes react. Sulfur and tellurium can bond with organic compounds containing both silicon and phosphorus.
All of the chalcogens form hydrides. In some cases this occurs with chalcogens bonding with two hydrogen atoms. However, tellurium hydride and polonium hydride are both volatile and highly labile. Also, oxygen can bond to hydrogen in a 1:1 ratio as in hydrogen peroxide, but this compound is unstable.
Chalcogen compounds form a number of interchalcogens. For instance, sulfur forms the toxic sulfur dioxide and sulfur trioxide. Tellurium also forms oxides. There are some chalcogen sulfides as well. These include selenium sulfide, an ingredient in some shampoos.
Since 1990, a number of borides with chalcogens bonded to them have been detected. The chalcogens in these compounds are mostly sulfur, although some do contain selenium instead. One such chalcogen boride consists of two molecules of dimethyl sulfide attached to a boron-hydrogen molecule. Other important boron-chalcogen compounds include macropolyhedral systems. Such compounds tend to feature sulfur as the chalcogen. There are also chalcogen borides with two, three, or four chalcogens. Many of these contain sulfur but some, such as Na2B2Se7, contain selenium instead.
History
Early discoveries
Sulfur has been known since ancient times and is mentioned in the Bible fifteen times. It was known to the ancient Greeks and commonly mined by the ancient Romans. In the Middle Ages, it was a key part of alchemical experiments. In the 1700s and 1800s, scientists Joseph Louis Gay-Lussac and Louis-Jacques Thénard proved sulfur to be a chemical element.
Early attempts to separate oxygen from air were hampered by the fact that air was thought of as a single element up to the 17th and 18th centuries. Robert Hooke, Mikhail Lomonosov, Ole Borch, and Pierre Bayen all successfully created oxygen, but did not realize it at the time. Oxygen was discovered by Joseph Priestley in 1774 when he focused sunlight on a sample of mercuric oxide and collected the resulting gas. Carl Wilhelm Scheele had also created oxygen in 1771 by the same method, but Scheele did not publish his results until 1777.
Tellurium was first discovered in 1783 by Franz Joseph Müller von Reichenstein. He discovered tellurium in a sample of what is now known as calaverite. Müller assumed at first that the sample was pure antimony, but tests he ran on the sample did not agree with this. Müller then guessed that the sample was bismuth sulfide, but tests confirmed that the sample was not that. For some years, Müller pondered the problem. Eventually he realized that the sample was gold bonded with an unknown element. In 1796, Müller sent part of the sample to the German chemist Martin Klaproth, who purified the undiscovered element. Klaproth decided to call the element tellurium, after the Latin word for earth.
Selenium was discovered in 1817 by Jöns Jacob Berzelius. Berzelius noticed a reddish-brown sediment at a sulfuric acid manufacturing plant. The sample was thought to contain arsenic. Berzelius initially thought that the sediment contained tellurium, but came to realize that it also contained a new element, which he named selenium after the Greek moon goddess Selene.
Periodic table placing
Three of the chalcogens (sulfur, selenium, and tellurium) were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner as having similar properties. Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music. His version included a "group b" consisting of oxygen, sulfur, selenium, tellurium, and osmium.
In 1869, Dmitri Mendeleev proposed his periodic table placing oxygen at the top of "group VI" above sulfur, selenium, and tellurium. Chromium, molybdenum, tungsten, and uranium were sometimes included in this group, but they would later be rearranged as part of group VIB; uranium would later be moved to the actinide series. Oxygen, along with sulfur, selenium, tellurium, and later polonium, would be grouped in group VIA, until the group's name was changed to group 16 in 1988.
Modern discoveries
In the late 19th century, Marie Curie and Pierre Curie discovered that a sample of pitchblende was emitting four times as much radioactivity as could be explained by the presence of uranium alone. The Curies gathered several tons of pitchblende and refined it for several months until they had a pure sample of polonium. The discovery officially took place in 1898. Prior to the invention of particle accelerators, the only way to produce polonium was to extract it over several months from uranium ore.
The first attempt at creating livermorium was made from 1976 to 1977 at the Lawrence Berkeley National Laboratory (LBNL), whose scientists bombarded curium-248 with calcium-48 but were not successful. After several failed attempts in 1977, 1998, and 1999 by research groups in Russia, Germany, and the US, livermorium was created successfully in 2000 at the Joint Institute for Nuclear Research by bombarding curium-248 atoms with calcium-48 atoms. The element was known as ununhexium until it was officially named livermorium in 2012.
Names and etymology
In the 19th century, Jöns Jacob Berzelius suggested calling the elements in group 16 "amphigens", as the elements in the group formed amphid salts (salts of oxyacids, formerly regarded as composed of two oxides, an acid and a basic oxide). The term received some use in the early 1800s but is now obsolete. The name chalcogen comes from the Greek words χαλκός (khalkós, literally "copper") and γενής (genēs: born, gender, kindle). It was first used in 1932 by Wilhelm Biltz's group at Leibniz University Hannover, where it was proposed by Werner Fischer. The word "chalcogen" gained popularity in Germany during the 1930s because the term was analogous to "halogen". Although the literal meanings of the modern Greek words imply that chalcogen means "copper-former", this is misleading because the chalcogens have nothing to do with copper in particular. "Ore-former" has been suggested as a better translation, as the vast majority of metal ores are chalcogenides and the word in ancient Greek was associated with metals and metal-bearing rock in general; copper, and its alloy bronze, was one of the first metals to be used by humans.
Oxygen's name comes from the Greek words oxys and genes, together meaning "acid-forming". Sulfur's name comes from ancient Latin and Sanskrit words for the element. Selenium is named after the Greek goddess of the moon, Selene, to match the previously discovered element tellurium, whose name comes from the Latin word tellus, meaning earth. Polonium is named after Marie Curie's country of birth, Poland. Livermorium is named for the Lawrence Livermore National Laboratory.
Occurrence
The four lightest chalcogens (oxygen, sulfur, selenium, and tellurium) are all primordial elements on Earth. Sulfur and oxygen occur as constituents of copper ores, and selenium and tellurium occur in small traces in such ores. Polonium forms naturally from the decay of other elements, even though it is not primordial. Livermorium does not occur naturally at all.
Oxygen makes up 21% of the atmosphere by volume, 89% of water by weight, 46% of the Earth's crust by weight, and 65% of the human body. Oxygen also occurs in many minerals, being found in all oxide minerals and hydroxide minerals, and in numerous other mineral groups. Stars of at least eight times the mass of the Sun also produce oxygen in their cores via nuclear fusion. Oxygen is the third-most abundant element in the universe, making up 1% of the universe by weight.
Sulfur makes up 0.035% of the Earth's crust by weight, making it the 17th most abundant element there and makes up 0.25% of the human body. It is a major component of soil. Sulfur makes up 870 parts per million of seawater and about 1 part per billion of the atmosphere. Sulfur can be found in elemental form or in the form of sulfide minerals, sulfate minerals, or sulfosalt minerals. Stars of at least 12 times the mass of the Sun produce sulfur in their cores via nuclear fusion. Sulfur is the tenth most abundant element in the universe, making up 500 parts per million of the universe by weight.
Selenium makes up 0.05 parts per million of the Earth's crust by weight, making it the 67th most abundant element in the Earth's crust. Selenium makes up on average 5 parts per million of soils. Seawater contains around 200 parts per trillion of selenium. The atmosphere contains 1 nanogram of selenium per cubic meter. There are mineral groups known as selenates and selenites, but there are not many minerals in these groups. Selenium is not produced directly by nuclear fusion. Selenium makes up 30 parts per billion of the universe by weight.
There are only 5 parts per billion of tellurium in the Earth's crust and 15 parts per billion of tellurium in seawater. Tellurium is one of the eight or nine least abundant elements in the Earth's crust. There are a few dozen tellurate minerals and telluride minerals, and tellurium occurs in some minerals with gold, such as sylvanite and calaverite. Tellurium makes up 9 parts per billion of the universe by weight.
Polonium only occurs in trace amounts on Earth, forming via the radioactive decay of uranium and thorium. It is present in uranium ores in concentrations of 100 micrograms per metric ton. Very minute amounts of polonium exist in the soil, and thus in most food and in the human body. The Earth's crust contains less than 1 part per billion of polonium, making it one of the ten rarest metals on Earth.
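The abundance figures in this section span several different units; a small Python snippet (values copied from the text above, normalized by weight, with polonium taken as an upper bound) makes the comparison direct:

```python
# Crustal abundances by weight, converted to parts per billion (ppb).
CRUST_ABUNDANCE_PPB = {
    "oxygen":    0.46 * 1e9,     # 46% of the crust
    "sulfur":    0.00035 * 1e9,  # 0.035%
    "selenium":  0.05 * 1e3,     # 0.05 ppm
    "tellurium": 5.0,            # 5 ppb
    "polonium":  1.0,            # quoted above as "less than 1 ppb"
}

for element, ppb in CRUST_ABUNDANCE_PPB.items():
    print(f"{element:9s} {ppb:>13,.2f} ppb")
```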
Livermorium is always produced artificially in particle accelerators. Even when it is produced, only a small number of atoms are synthesized at a time.
Chalcophile elements
Chalcophile ("chalcogen-loving") elements are those metals and heavier nonmetals that have a low affinity for oxygen and prefer to bond with the heavier chalcogen sulfur as sulfides; they remain on or close to the Earth's surface because the compounds they form do not sink into the core. Because sulfide minerals are much denser than the silicate minerals formed by lithophile elements, chalcophile elements separated below the lithophiles at the time of the first crystallisation of the Earth's crust. This has led to their depletion in the Earth's crust relative to their solar abundances, though this depletion has not reached the levels found with siderophile elements.
Production
Approximately 100 million metric tons of oxygen are produced yearly. Oxygen is most commonly produced by fractional distillation, in which air is cooled to a liquid, then warmed, allowing all the components of air except for oxygen to turn to gases and escape. Fractionally distilling air several times can produce 99.5% pure oxygen. Another method of producing oxygen is to send a stream of dry, clean air through a bed of molecular sieves made of zeolite, which adsorbs the nitrogen in the air, leaving 90 to 93% pure oxygen.
Sulfur can be mined in its elemental form, although this method is no longer as popular as it used to be. In 1865 a large deposit of elemental sulfur was discovered in the U.S. states of Louisiana and Texas, but it was difficult to extract at the time. In the 1890s, Herman Frasch came up with the solution of liquefying the sulfur with superheated steam and pumping the sulfur up to the surface. These days sulfur is instead more often extracted from oil, natural gas, and tar.
The world production of selenium is around 1500 metric tons per year, out of which roughly 10% is recycled. Japan is the largest producer, producing 800 metric tons of selenium per year. Other large producers include Belgium (300 metric tons per year), the United States (over 200 metric tons per year), Sweden (130 metric tons per year), and Russia (100 metric tons per year). Selenium can be extracted from the waste from the process of electrolytically refining copper. Another method of producing selenium is to farm selenium-gathering plants such as milk vetch. This method could produce three kilograms of selenium per acre, but is not commonly practiced.
Tellurium is mostly produced as a by-product of the processing of copper. Tellurium can also be refined by electrolytic reduction of sodium telluride. The world production of tellurium is between 150 and 200 metric tons per year. The United States is one of the largest producers of tellurium, producing around 50 metric tons per year. Peru, Japan, and Canada are also large producers of tellurium.
Until the creation of nuclear reactors, all polonium had to be extracted from uranium ore. In modern times, most isotopes of polonium are produced by bombarding bismuth with neutrons. Polonium can also be produced by high neutron fluxes in nuclear reactors. Approximately 100 grams of polonium are produced yearly. All the polonium produced for commercial purposes is made in the Ozersk nuclear reactor in Russia. From there, it is taken to Samara, Russia for purification, and from there to St. Petersburg for distribution. The United States is the largest consumer of polonium.
All livermorium is produced artificially in particle accelerators. The first successful production of livermorium was achieved by bombarding curium-248 atoms with calcium-48 atoms. As of 2011, roughly 25 atoms of livermorium had been synthesized.
Applications
Metabolism is the most important source and use of oxygen. Industrial uses are minor by comparison; they include steelmaking (55% of all purified oxygen produced), the chemical industry (25% of all purified oxygen), medical use, water treatment (as oxygen kills some types of bacteria), rocket fuel (in liquid form), and metal cutting.
Most sulfur produced is transformed into sulfur dioxide, which is further transformed into sulfuric acid, a very common industrial chemical. Other common uses include serving as a key ingredient of gunpowder and Greek fire and being used to change soil pH. Sulfur is also mixed into rubber to vulcanize it, and is used in some types of concrete and fireworks. 60% of all sulfuric acid produced is used to generate phosphoric acid. Sulfur is used as a pesticide (specifically as an acaricide and fungicide) on "orchard, ornamental, vegetable, grain, and other crops."
Around 40% of all selenium produced goes to glassmaking. 30% of all selenium produced goes to metallurgy, including manganese production. 15% of all selenium produced goes to agriculture. Electronics such as photovoltaic materials claim 10% of all selenium produced. Pigments account for 5% of all selenium produced. Historically, machines such as photocopiers and light meters used one-third of all selenium produced, but this application is in steady decline.
Tellurium suboxide, a mixture of tellurium and tellurium dioxide, is used in the rewritable data layer of some CD-RW disks and DVD-RW disks. Bismuth telluride is also used in many microelectronic devices, such as photoreceptors. Tellurium is sometimes used as an alternative to sulfur in vulcanized rubber. Cadmium telluride is used as a high-efficiency material in solar panels.
Some of polonium's applications relate to the element's radioactivity. For instance, polonium is used as an alpha-particle generator for research. Polonium alloyed with beryllium provides an efficient neutron source. Polonium is also used in nuclear batteries. Most polonium is used in antistatic devices. Livermorium does not have any uses whatsoever due to its extreme rarity and short half-life.
Organochalcogen compounds are involved in the semiconductor process. These compounds also feature in ligand chemistry and biochemistry. One application of chalcogens themselves is to manipulate redox couples in supramolecular chemistry (chemistry involving non-covalent bond interactions). This leads to applications such as crystal packing, assembly of large molecules, and biological recognition of patterns. The secondary bonding interactions of the larger chalcogens, selenium and tellurium, can create organic solvent-holding acetylene nanotubes. Chalcogen interactions are useful for conformational analysis and stereoelectronic effects, among other things. Chalcogenides with through-bonds also have applications; for instance, divalent sulfur can stabilize carbanions, cationic centers, and radicals. Chalcogens can confer upon ligands (such as DCTO) properties such as being able to transform Cu(II) to Cu(I). Studying chalcogen interactions gives access to radical cations, which are used in mainstream synthetic chemistry. Metallic redox centers of biological importance are tunable by interactions of ligands containing chalcogens, such as methionine and selenocysteine. Also, chalcogen through-bonds can provide insight into the process of electron transfer.
Biological role
Oxygen is needed by almost all organisms to generate ATP. It is also a key component of most other biological compounds, such as water, amino acids and DNA. Human blood contains a large amount of oxygen. Human bones contain 28% oxygen, and human tissue 16%. A typical 70-kilogram human contains 43 kilograms of oxygen, mostly in the form of water.
All animals need significant amounts of sulfur. Some amino acids, such as cysteine and methionine, contain sulfur. Plant roots take up sulfate ions from the soil and reduce them to sulfide ions. Metalloproteins also use sulfur to attach to useful metal atoms in the body, and sulfur similarly attaches itself to poisonous metal atoms like cadmium to haul them to the safety of the liver. On average, humans consume 900 milligrams of sulfur each day. Sulfur compounds, such as those found in skunk spray, often have strong odors.
All animals and some plants need trace amounts of selenium, but only for some specialized enzymes. Humans consume on average between 6 and 200 micrograms of selenium per day. Mushrooms and Brazil nuts are especially noted for their high selenium content. Selenium in foods is most commonly found in the form of amino acids such as selenocysteine and selenomethionine. Selenium can protect against heavy metal poisoning.
Tellurium is not known to be needed for animal life, although a few fungi can incorporate it in compounds in place of selenium. Microorganisms also absorb tellurium and emit dimethyl telluride. Most tellurium in the blood stream is excreted slowly in urine, but some is converted to dimethyl telluride and released through the lungs. On average, humans ingest about 600 micrograms of tellurium daily. Plants can take up some tellurium from the soil. Onions and garlic have been found to contain as much as 300 parts per million of tellurium in dry weight.
Polonium has no biological role, and is highly toxic on account of being radioactive.
Toxicity
Oxygen is generally nontoxic, but oxygen toxicity has been reported when it is used in high concentrations. In both elemental gaseous form and as a component of water, it is vital to almost all life on Earth. Despite this, liquid oxygen is highly dangerous, and even gaseous oxygen is dangerous in excess. For instance, sports divers have occasionally drowned from convulsions caused by breathing pure oxygen at depth underwater. Oxygen is also toxic to some bacteria. Ozone, an allotrope of oxygen, is toxic to most life and can cause lesions in the respiratory tract.
Sulfur is generally nontoxic and is even a vital nutrient for humans. However, in its elemental form it can cause redness in the eyes and skin, a burning sensation and a cough if inhaled, a burning sensation and diarrhea and/or catharsis if ingested, and can irritate the mucous membranes. An excess of sulfur can be toxic for cows because microbes in the rumens of cows produce toxic hydrogen sulfide upon reaction with sulfur. Many sulfur compounds, such as hydrogen sulfide (H2S) and sulfur dioxide (SO2), are highly toxic.
Selenium is a trace nutrient required by humans on the order of tens or hundreds of micrograms per day. A dose of over 450 micrograms can be toxic, resulting in bad breath and body odor. Extended, low-level exposure, which can occur at some industries, results in weight loss, anemia, and dermatitis. In many cases of selenium poisoning, selenous acid is formed in the body. Hydrogen selenide (H2Se) is highly toxic.
Exposure to tellurium can produce unpleasant side effects. As little as 10 micrograms of tellurium per cubic meter of air can cause notoriously unpleasant breath, described as smelling like rotten garlic. Acute tellurium poisoning can cause vomiting, gut inflammation, internal bleeding, and respiratory failure. Extended, low-level exposure to tellurium causes tiredness and indigestion. Sodium tellurite (Na2TeO3) is lethal in amounts of around 2 grams.
Polonium is dangerous as an alpha particle emitter. If ingested, polonium-210 is a million times as toxic as hydrogen cyanide by weight; it has been used as a murder weapon in the past, most famously to kill Alexander Litvinenko. Polonium poisoning can cause nausea, vomiting, anorexia, and lymphopenia. It can also damage hair follicles and white blood cells. Polonium-210 is only dangerous if ingested or inhaled because its alpha particle emissions cannot penetrate human skin. Polonium-209 is also toxic, and can cause leukemia.
Amphid salts
Amphid salts was a name given by Jöns Jacob Berzelius in the 19th century for chemical salts derived from the 16th group of the periodic table, which included oxygen, sulfur, selenium, and tellurium. The term received some use in the early 1800s but is now obsolete. The current term for the 16th group is chalcogens.
See also
Chalcogenide
Gold chalcogenides
Halogen
Interchalcogen
Pnictogen
Carbon dioxide
https://en.wikipedia.org/wiki/Carbon%20dioxide

Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and at normally encountered concentrations it is odorless. As the source of carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.042% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the main cause of these increased concentrations, which are the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological features. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the CO2 being released back into the atmosphere. CO2 is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas.
Nearly all CO2 produced by humans goes into the atmosphere. Less than 1% of the CO2 produced annually is put to commercial use, mostly in the fertilizer industry and in the oil and gas industry for enhanced oil recovery. Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
Chemical and physical properties
Structure, bonding and molecular vibrations
The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment.
As a linear triatomic molecule, CO2 has four vibrational modes. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15.0 μm). The symmetric stretching mode does not create an electric dipole, so it is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.20 μm), with a Fermi resonance doublet at 1285 cm−1.
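As a quick check of the quoted pairs, wavelength is simply the reciprocal of wavenumber:

$$\lambda = \frac{1}{\tilde{\nu}}, \qquad \frac{1}{2349\ \mathrm{cm^{-1}}} \approx 4.26\times10^{-4}\ \mathrm{cm} \approx 4.26\ \mu\mathrm{m}, \qquad \frac{1}{667\ \mathrm{cm^{-1}}} \approx 15.0\ \mu\mathrm{m},$$

matching the quoted wavelengths to rounding.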
In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules.
In aqueous solution
Carbon dioxide is soluble in water, in which it reversibly forms H2CO3 (carbonic acid), a weak acid, since its ionization in water is incomplete:

CO2 + H2O ⇌ H2CO3
The hydration equilibrium constant of carbonic acid at 25 °C is

Kh = [H2CO3]/[CO2(aq)] ≈ 1.7 × 10−3 (in pure water)

Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as CO2 molecules, not affecting the pH.
The relative concentrations of CO2, H2CO3, and the deprotonated forms HCO3− (bicarbonate) and CO32− (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%), becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter.
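The speciation behind a Bjerrum plot can be computed directly from the apparent dissociation constants quoted later in this section; a minimal Python sketch (function and variable names are illustrative, not from any particular library):

```python
def carbonate_fractions(ph: float, pka1: float = 6.38, pka2: float = 10.329):
    """Fractions of dissolved inorganic carbon present as CO2(aq)/H2CO3,
    bicarbonate (HCO3-) and carbonate (CO3 2-) at a given pH, using the
    standard diprotic-acid speciation formulas."""
    h = 10.0 ** -ph                        # [H+] in mol/L
    ka1, ka2 = 10.0 ** -pka1, 10.0 ** -pka2
    denom = h * h + h * ka1 + ka1 * ka2
    return h * h / denom, h * ka1 / denom, ka1 * ka2 / denom

# At typical seawater pH (about 8.2), bicarbonate dominates, as stated above:
co2, hco3, co3 = carbonate_fractions(8.2)
print(f"CO2(aq): {co2:.1%}  HCO3-: {hco3:.1%}  CO3 2-: {co3:.1%}")
# -> roughly 1.5%, 97.8%, 0.7%
```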
Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (HCO3−):

H2CO3 ⇌ HCO3− + H+
Ka1 = 2.5 × 10−4 mol/L; pKa1 = 3.6 at 25 °C.
This is the true first acid dissociation constant, defined as

Ka1 = [HCO3−][H+] / [H2CO3]
where the denominator includes only covalently bound H2CO3 and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.16 × 10−7 (or pKa1 = 6.38) is an apparent value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that

Ka1(apparent) = [HCO3−][H+] / ([H2CO3] + [CO2(aq)])
Since most of the dissolved CO2 remains as CO2 molecules, Ka1(apparent) has a much larger denominator and a much smaller value than the true Ka1.
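The true and apparent constants are linked through the hydration equilibrium; a one-line consistency check using the values quoted above:

$$K_{a1}(\mathrm{apparent}) = K_{a1}\,\frac{K_h}{1+K_h} \approx K_{a1} K_h = (2.5\times10^{-4})(1.7\times10^{-3}) \approx 4.3\times10^{-7},$$

in agreement with the often-quoted 4.16 × 10−7 to within the rounding of the inputs.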
The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on pH of the solution. At high pH, it dissociates significantly into the carbonate ion (CO32−):

HCO3− ⇌ CO32− + H+
Ka2 = 4.69 × 10−11 mol/L; pKa2 = 10.329
In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase.
In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly, from below 1 μS/cm to nearly 30 μS/cm. When heated, the water gradually loses the conductivity induced by the presence of CO2, an effect especially noticeable as temperatures exceed 30 °C.
By comparison, the temperature dependence of the electrical conductivity of fully deionized water without dissolved CO2 is relatively small.
Chemical reactions
CO2 is a potent electrophile, with an electrophilic reactivity comparable to that of benzaldehyde or of strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with CO2 are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in CO2 scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating.
Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds, react with CO2 to give carboxylates:

MR + CO2 → RCO2M

where M = Li or MgBr and R = alkyl or aryl.
In metal carbon dioxide complexes, CO2 serves as a ligand, which can facilitate the conversion of CO2 to other chemicals.
The reduction of CO2 to CO is ordinarily a difficult and slow reaction:

CO2 + 2 H+ + 2 e− → CO + H2O
The redox potential for this reaction near pH 7 is about −0.53 V versus the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process.
Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from CO2 absorbed from the air and water:

n CO2 + n H2O → (CH2O)n + n O2
Physical properties
Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air.
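The quoted ratio to air follows from the molar masses, since equal volumes of (ideal) gases at the same temperature and pressure contain equal numbers of molecules:

$$\frac{\rho_{\mathrm{CO_2}}}{\rho_{\mathrm{air}}} \approx \frac{M_{\mathrm{CO_2}}}{M_{\mathrm{air}}} = \frac{44.01\ \mathrm{g/mol}}{28.96\ \mathrm{g/mol}} \approx 1.52,$$

in line with the stated factor of about 1.53.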
Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice.
Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called carbonia, is produced by supercooling heated CO2 at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released.
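The 1 atm behavior just described can be captured in a few lines; a minimal Python sketch using only the landmark values quoted in this section (the function name is illustrative):

```python
SUBLIMATION_T_1ATM_K = 194.6855  # deposition/sublimation point at 1 atm (quoted above)
TRIPLE_POINT_P_MPA = 0.51795     # below this pressure, liquid CO2 cannot form

def co2_phase_at_1_atm(temperature_k: float) -> str:
    """Phase of CO2 at 1 atm (0.101325 MPa). Since 1 atm is far below the
    triple-point pressure, the solid sublimes directly to gas with no
    liquid phase in between."""
    return "solid (dry ice)" if temperature_k < SUBLIMATION_T_1ATM_K else "gas"

print(co2_phase_at_1_atm(150.0))   # -> solid (dry ice)
print(co2_phase_at_1_atm(273.15))  # -> gas
```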
At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide.
[Tables of thermal and physical properties of saturated liquid carbon dioxide and of carbon dioxide gas at atmospheric pressure omitted]
Biological role
Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals and aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration.
Photosynthesis and carbon fixation
Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product.
Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation: the production of two molecules of 3-phosphoglycerate from CO2 and ribulose bisphosphate.
RuBisCO is thought to be the single most abundant protein on Earth.
Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is Emiliania huxleyi whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales.
Plants can grow as much as 50% faster in CO2 concentrations of 1,000 ppm compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated CO2 levels cause increased growth that is reflected in the harvestable yield of crops, with wheat, rice and soybean all showing yield increases of 12–14% under elevated CO2 in free-air carbon dioxide enrichment (FACE) experiments.
Increased atmospheric CO2 concentrations result in fewer stomata developing on plants, which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that CO2 enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems, as herbivores will need to eat more food to gain the same amount of protein.
The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of CO2.
Plants also emit CO2 during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of CO2 each year, a mature forest will produce as much CO2 from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on Earth, photosynthesis by phytoplankton consumes dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere.
Toxicity
Carbon dioxide content in fresh air (averaged between sea level and the 10 kPa pressure level) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location.
In humans, exposure to CO2 at concentrations greater than 5% causes the development of hypercapnia and respiratory acidosis. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations of more than 10% may cause convulsions, coma, and death. CO2 levels of more than 30% act rapidly leading to loss of consciousness in seconds.
Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered or pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by CO2 emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is mazuku.
Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies have suggested that inspired concentrations of 2.0 percent could be tolerated in closed air spaces (e.g., a submarine), since the adaptation is physiological and reversible and no deterioration in performance or normal physical activity occurs at this level of exposure for five days. Yet other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition.
Below 1%
There are few studies of the health effects of long-term continuous CO2 exposure on humans and animals at levels below 1%. Occupational exposure limits have been set in the United States at 0.5% (5,000 ppm) for an eight-hour period. At this concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5-hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1,000 ppm), likely due to CO2-induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1,000 ppm, when compared to 500 ppm.
However, a review of the literature found that a reliable subset of studies on carbon dioxide-induced cognitive impairment showed only a small effect on high-level decision making (for concentrations below 5,000 ppm). Most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses, and differing cognitive assessments. Similarly, a study on the effects of CO2 concentration in motorcycle helmets has been criticized for dubious methodology: it did not record the self-reports of motorcycle riders, and it took measurements using mannequins. Further, when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised, the concentration of CO2 declined to safe levels (0.2%).
Ventilation
Poor ventilation is one of the main causes of excessive CO2 concentrations in closed spaces, leading to poor indoor air quality. The carbon dioxide differential above outdoor concentrations at steady-state conditions (when occupancy and ventilation system operation have continued long enough that the CO2 concentration has stabilized) is sometimes used to estimate ventilation rates per person. Higher CO2 concentrations are associated with degradation of occupant health, comfort, and performance. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus, if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Concentrations in poorly ventilated spaces can be even higher (3,000–4,000 ppm).
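The estimate works by a steady-state mass balance: outdoor-air flow per occupant Q ≈ G / (C_indoor − C_outdoor), where G is the per-person CO2 generation rate. A minimal sketch, assuming a sedentary-office generation rate of about 0.0052 L/s (a common handbook figure; all names here are illustrative):

```python
# Per-person ventilation rate from the steady-state CO2 differential:
#   Q = G / (C_in - C_out)
# G is the CO2 generation rate per person in L/s; concentrations in ppm.

def ventilation_l_per_s_person(c_in_ppm: float, c_out_ppm: float,
                               g_l_per_s: float = 0.0052) -> float:
    delta_fraction = (c_in_ppm - c_out_ppm) * 1e-6  # ppm -> volume fraction
    return g_l_per_s / delta_fraction

# A 700 ppm differential corresponds to roughly 7.4 L/s per person:
print(round(ventilation_l_per_s_person(1100, 400), 1))
```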
Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. Canaries are more sensitive to asphyxiant gases than humans; a bird overcome by gas would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly.
In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen CO2) was added to a swimming pool to cool it down. A similar accident occurred in 2018, when a woman died from fumes emanating from the large amount of dry ice she was transporting in her car.
Indoor air
Humans spend more and more time in a confined atmosphere (around 80–90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various actors in France, the CO2 level in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm".
In homes, schools, nurseries and offices, there are no systematic relationships between the levels of CO2 and other pollutants, and indoor CO2 is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. CO2 is the parameter that changes the fastest (together with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of CO2 and CO emitted directly into the living environment.
Outdoor areas with elevated CO2 concentrations
Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about in diameter, concentrations of CO2 rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of CO2 produced by disturbance of deep lake water saturated with CO2 are thought to have caused 37 fatalities at Lake Monoun, Cameroon, in 1984 and 1,700 casualties at Lake Nyos, Cameroon, in 1986.
Human physiology
Content
The body produces approximately of carbon dioxide per day per person, containing of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure that carbon dioxide would exert if it alone occupied the volume. In humans, the blood carbon dioxide contents are shown in the adjacent table.
Transport in the blood
CO2 is carried in blood in three different ways. Exact percentages vary between arterial and venous blood.
The majority (about 70% to 80%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells, by the reaction:
CO2 + H2O -> H2CO3 -> H+ + HCO3-
5–10% is dissolved in blood plasma
5–10% is bound to hemoglobin as carbamino compounds
Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect.
Regulation of respiration
Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue.
Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO2 in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis.
Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness.
The respiratory centers try to maintain an arterial CO2 partial pressure of 40 mmHg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.
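The relation at work here is a standard physiology approximation (not stated in the source): at a constant rate of metabolic CO2 production, arterial partial pressure varies inversely with alveolar ventilation, PaCO2 ≈ 0.863 × V̇CO2 / V̇A (V̇CO2 in mL/min, V̇A in L/min, PaCO2 in mmHg). Doubling alveolar ventilation at a fixed metabolic rate therefore roughly halves PaCO2, e.g., from 40 mmHg toward 20 mmHg.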
Concentrations and role in the environment
Atmosphere
Oceans
Ocean acidification
Carbon dioxide dissolves in the ocean to form carbonic acid (H2CO3), bicarbonate (HCO3-), and carbonate (CO3^2-). There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of the CO2 emitted by human activity.
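The underlying equilibria (standard carbonate-system chemistry, written here in the same plain notation as the other equations in this article) are:
CO2 + H2O <-> H2CO3
H2CO3 <-> H+ + HCO3-
HCO3- <-> H+ + CO3^2-
Added CO2 pushes these equilibria toward more H+, which is the lowering of ocean pH that the heading describes.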
Hydrothermal vents
Carbon dioxide is also introduced into the oceans through hydrothermal vents. The Champagne hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006.
Sources
The burning of fossil fuels for energy produces 36.8 billion tonnes of CO2 per year as of 2023. Nearly all of this goes into the atmosphere, where approximately half is subsequently absorbed into natural carbon sinks. Less than 1% of the CO2 produced annually is put to commercial use.
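As a back-of-envelope consistency check (the constants are standard reference values, not taken from this article), annual emissions can be converted to an atmospheric ppm increase:

```python
# Convert annual CO2 emissions (Gt) to an atmospheric ppm increase,
# assuming roughly half is taken up by natural sinks.
M_ATM_KG = 5.15e18            # mass of the atmosphere, kg
M_AIR, M_CO2 = 28.97, 44.01   # mean molar masses, g/mol

gt_per_ppm = M_ATM_KG * 1e-6 * (M_CO2 / M_AIR) / 1e12  # ~7.8 Gt CO2 per ppm

emitted_gt = 36.8
airborne_gt = emitted_gt * 0.5        # roughly half stays in the air
print(round(airborne_gt / gt_per_ppm, 1))  # ~2.4 ppm per year
```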
Biological processes
Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce CO2 and ethanol, also known as alcohol, as follows:
C6H12O6 -> 2 CO2 + 2 C2H5OH
All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The reactions involved are numerous, exceedingly complex, and not easily described. Refer to cellular respiration, anaerobic respiration and photosynthesis. The equation for the respiration of glucose and other monosaccharides is:
C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane.
Combustion
The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen:
CH4 + 2 O2 -> CO2 + 2 H2O
Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide:
2 Fe2O3 + 3 C -> 4 Fe + 3 CO2
By-product from hydrogen production
Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane).
Thermal decomposition of limestone
It is produced by thermal decomposition of limestone, by heating (calcining) at about , in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses:
CaCO3 -> CaO + CO2
Acids liberate CO2 from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below:
CaCO3 + 2 HCl -> CaCl2 + H2CO3
The carbonic acid (H2CO3) then decomposes to water and CO2:
H2CO3 -> H2O + CO2
Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams.
Commercial uses
Around 230 Mt of CO2 are used each year, mostly in the fertiliser industry for urea production (130 million tonnes) and in the oil and gas industry for enhanced oil recovery (70 to 80 million tonnes). Other commercial applications include food and beverage production, metal fabrication, cooling, fire suppression and stimulating plant growth in greenhouses.
Technology exists to capture CO2 from industrial flue gas or from the air. Research is ongoing on ways to use captured CO2 in products, and some of these processes have been deployed commercially. However, the potential to use CO2 in products is very small compared to the total volume that could foreseeably be captured. The vast majority of captured CO2 is considered a waste product and is sequestered in underground geologic formations.
Precursor to chemicals
In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using CO2 by the Kolbe–Schmitt reaction.
Captured CO2 could be used to produce methanol or electrofuels. To be carbon-neutral, the CO2 would need to come from bioenergy production or direct air capture.
Fossil fuel recovery
Carbon dioxide is used in enhanced oil recovery, where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, in which it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% additional to primary extraction. It acts as a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces the oil's viscosity and changes its surface chemistry, enabling the oil to flow more rapidly through the reservoir to the removal well.
Most CO2 injected in CO2-EOR projects comes from naturally occurring underground CO2 deposits. Some CO2 used in EOR is captured from industrial facilities such as natural gas processing plants, using carbon capture technology and transported to the oilfield in pipelines.
Agriculture
Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (and, if of large size, must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts like woody plant encroachment.
Foods
Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for usage in the EU (listed as E number E290), US, Australia and New Zealand (listed by its INS number 290).
A candy called Pop Rocks is pressurized with carbon dioxide gas at about . When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop.
Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids.
Beverages
Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen.
The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts carbon dioxide to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response.
Winemaking
Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine.
Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers.
Stunning animals
Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress.
Inert gas
Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc it reacts to oxidize most metals. Use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When CO2 is used for MIG welding, the process is sometimes referred to as MAG welding, for Metal Active Gas, because CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics, although this may be due to atmospheric reactions occurring at the puddle site. This reactivity is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but it may not be a problem for general mild steel welding, where ultimate ductility is not a major concern.
Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately , allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminium capsules of CO2 are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans.
Fire extinguisher
Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly; when the carbon dioxide disperses, the materials can catch fire again upon exposure to atmospheric oxygen. Carbon dioxide extinguishers are mainly used in server rooms.
Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries.
Supercritical CO2 as solvent
Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide.
Refrigerant
Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below at regular atmospheric pressure, regardless of the air temperature.
Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). CO2 might enjoy a renaissance because one of the main substitutes for CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound), contributes to climate change more than CO2 does. The physical properties of CO2 are highly favorable for cooling, refrigeration, and heating purposes, including a high volumetric cooling capacity. Due to the need to operate at pressures of up to , CO2 systems require highly mechanically resistant reservoirs and components that have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, CO2 (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. Coca-Cola has fielded CO2-based beverage coolers and the U.S. Army is interested in CO2 refrigeration and heating technology.
Minor uses
Carbon dioxide is the lasing medium in a carbon-dioxide laser, which is one of the earliest types of lasers.
Carbon dioxide can be used as a means of controlling the pH of swimming pools, by continuously adding the gas to the water, thus keeping the pH from rising. Among the advantages of this is the avoidance of handling (more hazardous) acids. Similarly, it is also used in maintaining reef aquaria, where it is commonly used in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate in order to allow the calcium carbonate to dissolve into the water more freely, where it is used by some corals to build their skeletons.
Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation.
Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer CO2 include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress.
Carbon dioxide is also used in several related cleaning and surface-preparation techniques.
History of discovery
Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" (spiritus sylvestris).
The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when fixed air was bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, the English chemist Joseph Priestley published a paper entitled Impregnating Water with Fixed Air in which he described a process of dripping sulfuric acid (or oil of vitriol, as Priestley knew it) onto chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas.
Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid .
Carbon dioxide in combination with nitrogen was known from earlier times as blackdamp, stythe or choke damp. Along with the other types of damp, it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes replaced the oxygen, creating a suffocating mixture of nitrogen and carbon dioxide.
See also
List of countries by carbon dioxide emissions
List of least carbon efficient power stations
Notes
References
External links
Current global map of carbon dioxide concentration
CDC – NIOSH Pocket Guide to Chemical Hazards – Carbon Dioxide
Trends in Atmospheric Carbon Dioxide (NOAA)
The rediscovery of CO2: History, What is Shecco? – CO2 as refrigerant
Acid anhydrides
Acidic oxides
Coolants
Fire suppression agents
Greenhouse gases
Household chemicals
Inorganic solvents
Laser gain media
Nuclear reactor coolants
Oxocarbons
Propellants
Refrigerants
Gaseous signaling molecules
E-number additives
Triatomic molecules | Carbon dioxide | [
"Physics",
"Chemistry",
"Environmental_science"
] | 8,921 | [
"Molecules",
"Environmental chemistry",
"Signal transduction",
"Gaseous signaling molecules",
"Triatomic molecules",
"Greenhouse gases",
"Carbon dioxide",
"Matter"
] |
5,910 | https://en.wikipedia.org/wiki/Cyanide | In chemistry, cyanide is a chemical compound that contains a C≡N functional group. This group, known as the cyano group, consists of a carbon atom triple-bonded to a nitrogen atom.
In inorganic cyanides, the cyanide group is present as the cyanide anion CN-. This anion is extremely poisonous. Soluble salts such as sodium cyanide (NaCN) and potassium cyanide (KCN) are highly toxic. Hydrocyanic acid, also known as hydrogen cyanide, or HCN, is a highly volatile liquid that is produced on a large scale industrially. It is obtained by acidification of cyanide salts.
Organic cyanides are usually called nitriles. In nitriles, the C≡N group is linked by a single covalent bond to carbon. For example, in acetonitrile (CH3CN), the cyanide group is bonded to a methyl group (CH3). Although nitriles generally do not release cyanide ions, the cyanohydrins do and are thus toxic.
Bonding
The cyanide ion, CN-, is isoelectronic with carbon monoxide (CO) and with molecular nitrogen (N≡N). A triple bond exists between C and N, and the negative charge is concentrated on the carbon atom.
Occurrence
In nature
Cyanides are produced by certain bacteria, fungi, and algae. Cyanide is an antifeedant in a number of plants. Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches. Chemical compounds that can release cyanide are known as cyanogenic compounds. In plants, cyanides are usually bound to sugar molecules in the form of cyanogenic glycosides and defend the plant against herbivores. Cassava roots (also called manioc), an important potato-like food grown in tropical countries (and the base from which tapioca is made), also contain cyanogenic glycosides.
The Madagascar bamboo Cathariostachys madagascariensis produces cyanide as a deterrent to grazing. In response, the golden bamboo lemur, which eats the bamboo, has developed a high tolerance to cyanide.
The hydrogenase enzymes contain cyanide ligands attached to iron in their active sites. The biosynthesis of cyanide in the NiFe hydrogenases proceeds from carbamoyl phosphate, which converts to cysteinyl thiocyanate, the CN- donor.
Interstellar medium
The cyanide radical •CN has been identified in interstellar space. Cyanogen, (CN)2, is used to measure the temperature of interstellar gas clouds.
Pyrolysis and combustion product
Hydrogen cyanide is produced by the combustion or pyrolysis of certain materials under oxygen-deficient conditions. For example, it can be detected in the exhaust of internal combustion engines and tobacco smoke. Certain plastics, especially those derived from acrylonitrile, release hydrogen cyanide when heated or burnt.
Organic derivatives
In IUPAC nomenclature, organic compounds that have a –C≡N functional group are called nitriles. An example of a nitrile is acetonitrile, CH3CN. Nitriles usually do not release cyanide ions. A functional group with a hydroxyl and a cyanide bonded to the same carbon atom is called a cyanohydrin (R2C(OH)CN). Unlike nitriles, cyanohydrins do release poisonous hydrogen cyanide.
Reactions
Protonation
Cyanide is basic; the pKa of hydrogen cyanide is 9.21. Thus, the addition of acids stronger than hydrogen cyanide to solutions of cyanide salts releases hydrogen cyanide:
H+ + CN- -> HCN
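Given the pKa, the Henderson–Hasselbalch relation gives the fraction of total cyanide present as the free anion at any pH; a minimal sketch (function names illustrative):

```python
# Fraction of total cyanide present as CN- at a given pH, using
# Henderson-Hasselbalch with pKa(HCN) = 9.21.
PKA_HCN = 9.21

def fraction_cn(ph: float) -> float:
    return 1.0 / (1.0 + 10 ** (PKA_HCN - ph))

# Near-neutral pH almost all cyanide is volatile HCN; at high pH it is CN-:
for ph in (7.0, 9.21, 11.0):
    print(ph, round(fraction_cn(ph), 3))   # 0.006, 0.5, 0.984
```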
Hydrolysis
Cyanide is unstable in water, but the reaction is slow until about 170 °C. It undergoes hydrolysis to give ammonia and formate, which are far less toxic than cyanide:
CN- + 2 H2O -> HCO2- + NH3
Cyanide hydrolase is an enzyme that catalyzes this reaction.
Alkylation
Because of the cyanide anion's high nucleophilicity, cyano groups are readily introduced into organic molecules by displacement of a halide group (e.g., the chloride on methyl chloride). In general, organic cyanides are called nitriles. In organic synthesis, cyanide is a C-1 synthon; i.e., it can be used to lengthen a carbon chain by one, while retaining the ability to be functionalized.
Redox
The cyanide ion is a reductant and is oxidized by strong oxidizing agents such as molecular chlorine (Cl2), hypochlorite (ClO-), and hydrogen peroxide (H2O2). These oxidizers are used to destroy cyanides in effluents from gold mining.
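One widely used destruction route is alkaline chlorination; a representative, simplified reaction sequence (not spelled out in the source) is:
CN- + OCl- -> OCN- + Cl-
2 OCN- + 3 OCl- + H2O -> N2 + 2 CO2 + 3 Cl- + 2 OH-
The first step converts cyanide to the far less toxic cyanate, which the second step oxidizes further.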
Metal complexation
The cyanide anion reacts with transition metals to form M-CN bonds. This reaction is the basis of cyanide's toxicity. The high affinities of metals for this anion can be attributed to its negative charge, compactness, and ability to engage in π-bonding.
Among the most important cyanide coordination compounds are potassium ferrocyanide and the pigment Prussian blue, which are both essentially nontoxic due to the tight binding of the cyanides to a central iron atom.
Prussian blue was first made, accidentally, around 1706, by heating substances containing iron, carbon, and nitrogen; other cyanides were made subsequently (and named after it). Among its many uses, Prussian blue gives the blue color to blueprints, bluing, and cyanotypes.
Manufacture
The principal process used to manufacture cyanides is the Andrussow process in which gaseous hydrogen cyanide is produced from methane and ammonia in the presence of oxygen and a platinum catalyst.
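The overall Andrussow stoichiometry (standard textbook form) is:
2 CH4 + 2 NH3 + 3 O2 -> 2 HCN + 6 H2O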
Sodium cyanide, the precursor to most cyanides, is produced by treating hydrogen cyanide with sodium hydroxide:
HCN + NaOH -> NaCN + H2O
Toxicity
Among the most toxic cyanides are hydrogen cyanide (HCN), sodium cyanide (NaCN), potassium cyanide (KCN), and calcium cyanide (Ca(CN)2). The cyanide anion is an inhibitor of the enzyme cytochrome c oxidase (also known as aa3), the fourth complex of the electron transport chain found in the inner membrane of the mitochondria of eukaryotic cells. It attaches to the iron within this protein. The binding of cyanide to this enzyme prevents transport of electrons from cytochrome c to oxygen. As a result, the electron transport chain is disrupted, meaning that the cell can no longer aerobically produce ATP for energy. Tissues that depend highly on aerobic respiration, such as the central nervous system and the heart, are particularly affected. This is an example of histotoxic hypoxia.
The most hazardous compound is hydrogen cyanide, which is a gas and kills by inhalation. For this reason, working with hydrogen cyanide requires wearing an air respirator supplied by an external oxygen source. Hydrogen cyanide is produced by adding acid to a solution containing a cyanide salt. Alkaline solutions of cyanide are safer to use because they do not evolve hydrogen cyanide gas. Hydrogen cyanide may be produced in the combustion of polyurethanes; for this reason, polyurethanes are not recommended for use in domestic and aircraft furniture. Oral ingestion of a small quantity of solid cyanide or a cyanide solution of as little as 200 mg, or exposure to airborne cyanide of 270 ppm, is sufficient to cause death within minutes.
Organic nitriles do not readily release cyanide ions, and so have low toxicities. By contrast, compounds such as trimethylsilyl cyanide readily release HCN or the cyanide ion upon contact with water.
Antidote
Hydroxocobalamin reacts with cyanide to form cyanocobalamin, which can be safely eliminated by the kidneys. This method has the advantage of avoiding the formation of methemoglobin (see below). This antidote kit is sold under the brand name Cyanokit and was approved by the U.S. FDA in 2006.
An older cyanide antidote kit included administration of three substances: amyl nitrite pearls (administered by inhalation), sodium nitrite, and sodium thiosulfate. The goal of the antidote was to generate a large pool of ferric iron () to compete for cyanide with cytochrome a3 (so that cyanide will bind to the antidote rather than the enzyme). The nitrites oxidize hemoglobin to methemoglobin, which competes with cytochrome oxidase for the cyanide ion. Cyanmethemoglobin is formed and the cytochrome oxidase enzyme is restored. The major mechanism to remove the cyanide from the body is by enzymatic conversion to thiocyanate by the mitochondrial enzyme rhodanese. Thiocyanate is a relatively non-toxic molecule and is excreted by the kidneys. To accelerate this detoxification, sodium thiosulfate is administered to provide a sulfur donor for rhodanese, needed in order to produce thiocyanate.
Sensitivity
Minimum risk levels (MRLs) may not protect for delayed health effects or health effects acquired following repeated sublethal exposure, such as hypersensitivity, asthma, or bronchitis. MRLs may be revised after sufficient data accumulates.
Applications
Mining
Cyanide is mainly produced for the mining of silver and gold: it helps dissolve these metals, allowing their separation from the other solids. In the cyanide process, finely ground high-grade ore is mixed with the cyanide (at a ratio of about 1:500 parts NaCN to ore); low-grade ores are stacked into heaps and sprayed with a cyanide solution (at a ratio of about 1:1000 parts NaCN to ore). The precious metals are complexed by the cyanide anions to form soluble derivatives, e.g., [Ag(CN)2]- (dicyanoargentate(I)) and [Au(CN)2]- (dicyanoaurate(I)). Silver is less "noble" than gold and often occurs as the sulfide, in which case redox is not invoked (no O2 is required). Instead, a displacement reaction occurs:
Ag2S + 4 NaCN + H2O -> 2 Na[Ag(CN)2] + NaSH + NaOH
4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH
The "pregnant liquor" containing these ions is separated from the solids, which are discarded to a tailing pond or spent heap, the recoverable gold having been removed. The metal is recovered from the "pregnant solution" by reduction with zinc dust or by adsorption onto activated carbon. This process can result in environmental and health problems. A number of environmental disasters have followed the overflow of tailing ponds at gold mines. Cyanide contamination of waterways has resulted in numerous cases of human and aquatic species mortality.
Aqueous cyanide is hydrolyzed rapidly, especially in sunlight. It can mobilize some heavy metals such as mercury if present. Gold can also be associated with arsenopyrite (FeAsS), which is similar to iron pyrite (fool's gold), wherein half of the sulfur atoms are replaced by arsenic. Gold-containing arsenopyrite ores are similarly reactive toward inorganic cyanide.
Industrial organic chemistry
The second major application of alkali metal cyanides (after mining) is in the production of CN-containing compounds, usually nitriles. Acyl cyanides are produced from acyl chlorides and cyanide. Cyanogen, cyanogen chloride, and the trimer cyanuric chloride are derived from alkali metal cyanides.
Medical uses
The cyanide compound sodium nitroprusside is used mainly in clinical chemistry to measure urine ketone bodies mainly as a follow-up to diabetic patients. On occasion, it is used in emergency medical situations to produce a rapid decrease in blood pressure in humans; it is also used as a vasodilator in vascular research. The cobalt in artificial vitamin B12 contains a cyanide ligand as an artifact of the purification process; this must be removed by the body before the vitamin molecule can be activated for biochemical use. During World War I, a copper cyanide compound was briefly used by Japanese physicians for the treatment of tuberculosis and leprosy.
Illegal fishing and poaching
Cyanides are illegally used to capture live fish near coral reefs for the aquarium and seafood markets. The practice is controversial, dangerous, and damaging but is driven by the lucrative exotic fish market.
Poachers in Africa have been known to use cyanide to poison waterholes, to kill elephants for their ivory.
Pest control
M44 cyanide devices are used in the United States to kill coyotes and other canids. Cyanide is also used for pest control in New Zealand, particularly for possums, an introduced marsupial that threatens the conservation of native species and spreads tuberculosis amongst cattle. Possums can become bait shy but the use of pellets containing the cyanide reduces bait shyness. Cyanide has been known to kill native birds, including the endangered kiwi. Cyanide is also effective for controlling the dama wallaby, another introduced marsupial pest in New Zealand. A licence is required to store, handle and use cyanide in New Zealand.
Cyanides are used as insecticides for fumigating ships. Cyanide salts are used for killing ants, and have in some places been used as rat poison (the less toxic poison arsenic is more common).
Niche uses
Potassium ferrocyanide is used to achieve a blue color on cast bronze sculptures during the final finishing stage of the sculpture. On its own, it will produce a very dark shade of blue and is often mixed with other chemicals to achieve the desired tint and hue. It is applied using a torch and paint brush while wearing the standard safety equipment used for any patina application: rubber gloves, safety glasses, and a respirator. The actual amount of cyanide in the mixture varies according to the recipes used by each foundry.
Cyanide is also used in jewelry-making and certain kinds of photography such as sepia toning.
Although usually thought to be toxic, cyanide and cyanohydrins increase germination in various plant species.
Human poisoning
Deliberate cyanide poisoning of humans has occurred many times throughout history.
Common salts such as sodium cyanide are involatile but water-soluble, so they are poisonous by ingestion. Hydrogen cyanide is a gas, making it more indiscriminately dangerous; however, it is lighter than air and rapidly disperses up into the atmosphere, which makes it ineffective as a chemical weapon.
Food additive
Because of the high stability of their complexation with iron, ferrocyanides (sodium ferrocyanide E535, potassium ferrocyanide E536, and calcium ferrocyanide E538) do not decompose to lethal levels in the human body and are used in the food industry as, e.g., an anticaking agent in table salt.
Chemical tests for cyanide
Cyanide is quantified by potentiometric titration, a method widely used in gold mining. It can also be determined by titration with silver ion.
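One classical version is Liebig's method, which exploits the 2:1 cyanide-to-silver stoichiometry of the [Ag(CN)2]- complex; a minimal sketch of the arithmetic (function names illustrative):

```python
# Cyanide molarity from a silver nitrate titration (Liebig method):
#   Ag+ + 2 CN- -> [Ag(CN)2]-
# so each mole of Ag+ delivered at the endpoint counts two moles of CN-.

def cyanide_molarity(v_agno3_ml: float, c_agno3_m: float,
                     v_sample_ml: float) -> float:
    mol_ag = c_agno3_m * v_agno3_ml / 1000.0
    return 2.0 * mol_ag / (v_sample_ml / 1000.0)

# 12.5 mL of 0.01 M AgNO3 against a 25 mL sample -> 0.01 M cyanide:
print(cyanide_molarity(12.5, 0.01, 25.0))
```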
Some analyses begin with an air-purge of an acidified boiling solution, sweeping the vapors into a basic absorber solution. The cyanide salt absorbed in the basic solution is then analyzed.
Qualitative tests
Because of the notorious toxicity of cyanide, many methods have been investigated. Benzidine gives a blue coloration in the presence of ferricyanide. Iron(II) sulfate added to a solution of cyanide, such as the filtrate from the sodium fusion test, gives Prussian blue. A solution of para-benzoquinone in DMSO reacts with inorganic cyanide to form a cyanophenol, which is fluorescent. Illumination with a UV light gives a green/blue glow if the test is positive.
References
External links
ATSDR medical management guidelines for cyanide poisoning (US)
HSE recommendations for first aid treatment of cyanide poisoning (UK)
Hydrogen cyanide and cyanides (CICAD 61)
IPCS/CEC Evaluation of antidotes for poisoning by cyanides
National Pollutant Inventory – Cyanide compounds fact sheet
Eating apple seeds is safe despite the small amount of cyanide
Toxicological Profile for Cyanide, U.S. Department of Health and Human Services, July 2006
Safety data (French)
Institut national de recherche et de sécurité (1997). "Cyanure d'hydrogène et solutions aqueuses". Fiche toxicologique n° 4, Paris: INRS, 5 pp. (PDF file, )
Institut national de recherche et de sécurité (1997). "Cyanure de sodium. Cyanure de potassium". Fiche toxicologique n° 111, Paris: INRS, 6 pp. (PDF file, )
Cyanides
Anions
Blood agents
Mitochondrial toxins
Nitrogen(−III) compounds
Toxicology | Cyanide | [
"Physics",
"Chemistry",
"Environmental_science"
] | 3,671 | [
"Matter",
"Toxicology",
"Anions",
"Chemical weapons",
"Blood agents",
"Ions"
] |
5,914 | https://en.wikipedia.org/wiki/Catalysis | Catalysis is the increase in rate of a chemical reaction due to an added substance known as a catalyst. Catalysts are not consumed by the reaction and remain unchanged after it. If the reaction is rapid and the catalyst recycles quickly, very small amounts of catalyst often suffice; mixing, surface area, and temperature are important factors in reaction rate. Catalysts generally react with one or more reactants to form intermediates that subsequently give the final reaction product, in the process regenerating the catalyst.
The rate increase occurs because the catalyst allows the reaction to occur by an alternative mechanism which may be much faster than the non-catalyzed mechanism. However the non-catalyzed mechanism does remain possible, so that the total rate (catalyzed plus non-catalyzed) can only increase in the presence of the catalyst and never decrease.
Catalysis may be classified as either homogeneous, whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant, or heterogeneous, whose components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category.
Catalysis is ubiquitous in chemical industry of all kinds. Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture.
The term "catalyst" is derived from Greek , kataluein, meaning "loosen" or "untie". The concept of catalysis was invented by chemist Elizabeth Fulhame, based on her novel work in oxidation-reduction experiments.
General principles
Example
An illustrative example is the effect of catalysts to speed the decomposition of hydrogen peroxide into water and oxygen:
2 H2O2 → 2 H2O + O2
This reaction proceeds because the reaction products are more stable than the starting compound, but this decomposition is so slow that hydrogen peroxide solutions are commercially available. In the presence of a catalyst such as manganese dioxide this reaction proceeds much more rapidly. This effect is readily seen by the effervescence of oxygen. The catalyst is not consumed in the reaction, and may be recovered unchanged and re-used indefinitely. Accordingly, manganese dioxide is said to catalyze this reaction. In living organisms, this reaction is catalyzed by enzymes (proteins that serve as catalysts) such as catalase.
Another example is the effect of catalysts on air pollution and reducing the amount of carbon monoxide. Development of active and selective catalysts for the conversion of carbon monoxide into desirable products is one of the most important roles of catalysts. Using catalysts for hydrogenation of carbon monoxide helps to remove this toxic gas and also attain useful materials.
Units
The SI derived unit for measuring the catalytic activity of a catalyst is the katal, which is quantified in moles per second. The productivity of a catalyst can be described by the turnover number (TON) and the catalytic activity by the turnover frequency (TOF), which is the TON per unit time. The biochemical equivalent is the enzyme unit. For more information on the efficiency of enzymatic catalysis, see the article on enzymes.
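Both figures of merit are simple ratios; a minimal sketch (function names illustrative):

```python
# Turnover number (TON) and turnover frequency (TOF):
#   TON = mol product / mol catalyst;  TOF = TON / time.

def ton(mol_product: float, mol_catalyst: float) -> float:
    return mol_product / mol_catalyst

def tof_per_hour(mol_product: float, mol_catalyst: float, hours: float) -> float:
    return ton(mol_product, mol_catalyst) / hours

# 0.5 mol of product over 1e-4 mol of catalyst in 2 h:
print(ton(0.5, 1e-4))                 # TON = 5000
print(tof_per_hour(0.5, 1e-4, 2.0))   # TOF = 2500 per hour
```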
Catalytic reaction mechanisms
In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction mechanism (reaction pathway) having a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst is regenerated.
As a simple example occurring in the gas phase, the reaction 2 SO2 + O2 → 2 SO3 can be catalyzed by adding nitric oxide. The reaction occurs in two steps:
2NO + O2 → 2NO2 (rate-determining)
NO2 + SO2 → NO + SO3 (fast)
The NO catalyst is regenerated. The overall rate is the rate of the slow step
v = 2k1[NO]^2[O2].
An example of heterogeneous catalysis is the reaction of oxygen and hydrogen on the surface of titanium dioxide (TiO2, or titania) to produce water. Scanning tunneling microscopy showed that the molecules undergo adsorption and dissociation. The dissociated, surface-bound O and H atoms diffuse together. The intermediate reaction states are HO2, H2O2, then H3O2, and the reaction product (water molecule dimers), after which the water molecule desorbs from the catalyst surface.
Reaction energetics
Catalysts enable pathways that differ from the uncatalyzed reactions. These pathways have lower activation energy. Consequently, more molecular collisions have the energy needed to reach the transition state. Hence, catalysts can enable reactions that would otherwise be blocked or slowed by a kinetic barrier. The catalyst may increase the reaction rate or selectivity, or enable the reaction at lower temperatures. This effect can be illustrated with an energy profile diagram.
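The size of the effect follows from the Arrhenius equation: if the pre-exponential factor is unchanged, the rate enhancement is exp(ΔEa/RT). A minimal sketch (the 80 → 60 kJ/mol barriers are illustrative values, not from the source):

```python
# Rate enhancement from lowering the activation energy, assuming an
# unchanged Arrhenius pre-exponential factor: k_cat/k_uncat = exp(dEa/RT).
import math

R = 8.314  # gas constant, J/(mol*K)

def enhancement(ea_uncat_kj: float, ea_cat_kj: float, t_k: float) -> float:
    return math.exp((ea_uncat_kj - ea_cat_kj) * 1000.0 / (R * t_k))

# Lowering the barrier by 20 kJ/mol at 298 K speeds the reaction ~3000-fold:
print(f"{enhancement(80.0, 60.0, 298.0):.0f}")
```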
In the catalyzed elementary reaction, catalysts do not change the extent of a reaction: they have no effect on the chemical equilibrium of a reaction. The ratio of the forward and the reverse reaction rates is unaffected (see also thermodynamics). The second law of thermodynamics describes why a catalyst does not change the chemical equilibrium of a reaction. Suppose there was such a catalyst that shifted an equilibrium. Introducing the catalyst to the system would result in a reaction to move to the new equilibrium, producing energy. Production of energy is a necessary result since reactions are spontaneous only if Gibbs free energy is produced, and if there is no energy barrier, there is no need for a catalyst. Then, removing the catalyst would also result in a reaction, producing energy; i.e. the addition and its reverse process, removal, would both produce energy. Thus, a catalyst that could change the equilibrium would be a perpetual motion machine, a contradiction to the laws of thermodynamics. Thus, catalysts do not alter the equilibrium constant. (A catalyst can however change the equilibrium concentrations by reacting in a subsequent step. It is then consumed as the reaction proceeds, and thus it is also a reactant. Illustrative is the base-catalyzed hydrolysis of esters, where the produced carboxylic acid immediately reacts with the base catalyst and thus the reaction equilibrium is shifted towards hydrolysis.)
The catalyst stabilizes the transition state more than it stabilizes the starting material. It decreases the kinetic barrier by decreasing the difference in energy between starting material and the transition state. It does not change the energy difference between starting materials and products (thermodynamic barrier), or the available energy (this is provided by the environment as heat or light).
Related concepts
Some so-called catalysts are really precatalysts. Precatalysts convert to catalysts in the reaction. For example, Wilkinson's catalyst RhCl(PPh3)3 loses one triphenylphosphine ligand before entering the true catalytic cycle. Precatalysts are easier to store but are easily activated in situ. Because of this preactivation step, many catalytic reactions involve an induction period.
In cooperative catalysis, chemical species that improve catalytic activity are called cocatalysts or promoters.
In tandem catalysis two or more different catalysts are coupled in a one-pot reaction.
In autocatalysis, the catalyst is a product of the overall reaction, in contrast to all other types of catalysis considered in this article. The simplest example of autocatalysis is a reaction of type A + B → 2 B, in one or in several steps. The overall reaction is just A → B, so that B is a product. But since B is also a reactant, it may be present in the rate equation and affect the reaction rate. As the reaction proceeds, the concentration of B increases and can accelerate the reaction as a catalyst. In effect, the reaction accelerates itself or is autocatalyzed. An example is the hydrolysis of an ester such as aspirin to a carboxylic acid and an alcohol. In the absence of added acid catalysts, the carboxylic acid product catalyzes the hydrolysis.
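The accelerating behavior can be seen by integrating the rate law d[B]/dt = k[A][B] numerically; [B] follows a logistic (S-shaped) curve. A minimal sketch (parameter values illustrative):

```python
# Autocatalysis A + B -> 2 B with rate = k[A][B]: forward-Euler integration.
# [B] grows slowly at first, accelerates, then saturates as A is exhausted.

def simulate(a=1.0, b=0.01, k=1.0, dt=0.01, steps=2000):
    for _ in range(steps):
        rate = k * a * b
        a -= rate * dt
        b += rate * dt
    return a, b

a_end, b_end = simulate()
print(a_end, b_end)   # nearly all A converted; the total a + b stays constant
```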
Switchable catalysis refers to a type of catalysis where the catalyst can be toggled between different ground states possessing distinct reactivity, typically by applying an external stimulus. This ability to reversibly switch the catalyst allows for spatiotemporal control over catalytic activity and selectivity. The external stimuli used to switch the catalyst can include changes in temperature, pH, light, electric fields, or the addition of chemical agents.
A true catalyst can work in tandem with a sacrificial catalyst. The true catalyst is consumed in the elementary reaction and turned into a deactivated form.
The sacrificial catalyst regenerates the true catalyst for another cycle. The sacrificial catalyst is consumed in the reaction, and as such, it is not really a catalyst, but a reagent. For example, osmium tetroxide (OsO4) is a good reagent for dihydroxylation, but it is highly toxic and expensive. In Upjohn dihydroxylation, the sacrificial catalyst N-methylmorpholine N-oxide (NMMO) regenerates OsO4, and only catalytic quantities of OsO4 are needed.
Classification
Catalysis may be classified as either homogeneous or heterogeneous. A homogeneous catalysis is one whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant's molecules. A heterogeneous catalysis is one where the reaction components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category. Similar mechanistic principles apply to heterogeneous, homogeneous, and biocatalysis.
Heterogeneous catalysis
Heterogeneous catalysts act in a different phase than the reactants. Most heterogeneous catalysts are solids that act on substrates in a liquid or gaseous reaction mixture. Important heterogeneous catalysts include zeolites, alumina, higher-order oxides, graphitic carbon, transition metal oxides, metals such as Raney nickel for hydrogenation, and vanadium(V) oxide for oxidation of sulfur dioxide into sulfur trioxide by the contact process.
Diverse mechanisms for reactions on surfaces are known, depending on how the adsorption takes place (Langmuir-Hinshelwood, Eley-Rideal, and Mars-van Krevelen). The total surface area of a solid has an important effect on the reaction rate. The smaller the catalyst particle size, the larger the surface area for a given mass of particles.
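For spherical particles the scaling is explicit: specific surface area = 6/(ρ·d), so halving the diameter doubles the area available per gram. A minimal sketch (the platinum example is illustrative, not from the source):

```python
# Specific surface area of monodisperse spherical particles: SSA = 6/(rho*d).

def ssa_m2_per_g(density_g_cm3: float, diameter_nm: float) -> float:
    d_cm = diameter_nm * 1e-7            # nm -> cm
    ssa_cm2_per_g = 6.0 / (density_g_cm3 * d_cm)
    return ssa_cm2_per_g * 1e-4          # cm^2/g -> m^2/g

# 5 nm platinum particles (rho ~ 21.45 g/cm^3) expose roughly 56 m^2/g:
print(round(ssa_m2_per_g(21.45, 5.0), 1))
```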
A heterogeneous catalyst has active sites, which are the atoms or crystal faces where the substrate actually binds. Active sites are atoms but are often described as a facet (edge, surface, step, etc.) of a solid. Most of the volume but also most of the surface of a heterogeneous catalyst may be catalytically inactive. Finding out the nature of the active site is technically challenging.
For example, the catalyst for the Haber process for the synthesis of ammonia from nitrogen and hydrogen is often described as iron. But detailed studies and many optimizations have led to catalysts that are mixtures of iron-potassium-calcium-aluminum-oxide. The reacting gases adsorb onto active sites on the iron particles. Once physically adsorbed, the reagents partially or wholly dissociate and form new bonds. In this way the particularly strong triple bond in nitrogen is broken, which would be extremely uncommon in the gas phase due to its high activation energy. Thus, the activation energy of the overall reaction is lowered, and the rate of reaction increases. Another place where a heterogeneous catalyst is applied is in the oxidation of sulfur dioxide on vanadium(V) oxide for the production of sulfuric acid. Many heterogeneous catalysts are in fact nanomaterials.
Heterogeneous catalysts are typically "supported", which means that the catalyst is dispersed on a second material that enhances the effectiveness or minimizes its cost. Supports prevent or minimize agglomeration and sintering of small catalyst particles, exposing more surface area, thus catalysts have a higher specific activity (per gram) on support. Sometimes the support is merely a surface on which the catalyst is spread to increase the surface area. More often, the support and the catalyst interact, affecting the catalytic reaction. Supports can also be used in nanoparticle synthesis by providing sites for individual molecules of catalyst to chemically bind. Supports are porous materials with a high surface area, most commonly alumina, zeolites, or various kinds of activated carbon. Specialized supports include silicon dioxide, titanium dioxide, calcium carbonate, and barium sulfate.
Electrocatalysts
In the context of electrochemistry, specifically in fuel cell engineering, various metal-containing catalysts are used to enhance the rates of the half reactions that comprise the fuel cell. One common type of fuel cell electrocatalyst is based upon nanoparticles of platinum that are supported on slightly larger carbon particles. When in contact with one of the electrodes in a fuel cell, this platinum increases the rate of oxygen reduction either to water or to hydroxide or hydrogen peroxide.
Homogeneous catalysis
Homogeneous catalysts function in the same phase as the reactants. Typically homogeneous catalysts are dissolved in a solvent with the substrates. One example of homogeneous catalysis involves the influence of H+ on the esterification of carboxylic acids, such as the formation of methyl acetate from acetic acid and methanol. High-volume processes requiring a homogeneous catalyst include hydroformylation, hydrosilylation, and hydrocyanation. For inorganic chemists, homogeneous catalysis is often synonymous with organometallic catalysts. Many homogeneous catalysts are, however, not organometallic, as illustrated by the use of cobalt salts that catalyze the oxidation of p-xylene to terephthalic acid.
Organocatalysis
Whereas transition metals sometimes attract most of the attention in the study of catalysis, small organic molecules without metals can also exhibit catalytic properties, as is apparent from the fact that many enzymes lack transition metals. Typically, organic catalysts require a higher loading (amount of catalyst per unit amount of reactant, expressed in mol%) than transition metal(-ion)-based catalysts, but these catalysts are usually commercially available in bulk, helping to lower costs. In the early 2000s, these organocatalysts were considered a "new generation" and are competitive with traditional metal(-ion)-containing catalysts.
Organocatalysts are thought to operate in a manner akin to metal-free enzymes, utilizing, e.g., non-covalent interactions such as hydrogen bonding. The discipline of organocatalysis is divided into the application of covalent (e.g., proline, DMAP) and non-covalent (e.g., thiourea organocatalysis) organocatalysts, referring to the preferred catalyst-substrate binding and interaction, respectively. The Nobel Prize in Chemistry 2021 was awarded jointly to Benjamin List and David W.C. MacMillan "for the development of asymmetric organocatalysis."
Photocatalysts
Photocatalysis is the phenomenon in which the catalyst absorbs light to generate an excited state that effects redox reactions. Singlet oxygen is usually produced by photocatalysis. Photocatalysts are also components of dye-sensitized solar cells.
Enzymes and biocatalysts
In biology, enzymes are protein-based catalysts in metabolism and catabolism. Most biocatalysts are enzymes, but other non-protein-based classes of biomolecules also exhibit catalytic properties including ribozymes, and synthetic deoxyribozymes.
Biocatalysts can be thought of as an intermediate between homogeneous and heterogeneous catalysts, although strictly speaking soluble enzymes are homogeneous catalysts and membrane-bound enzymes are heterogeneous. Several factors affect the activity of enzymes (and other catalysts) including temperature, pH, the concentration of enzymes, substrate, and products. A particularly important reagent in enzymatic reactions is water, which is the product of many bond-forming reactions and a reactant in many bond-breaking processes.
In biocatalysis, enzymes are employed to prepare many commodity chemicals including high-fructose corn syrup and acrylamide.
Some monoclonal antibodies whose binding target is a stable molecule that resembles the transition state of a chemical reaction can function as weak catalysts for that chemical reaction by lowering its activation energy. Such catalytic antibodies are sometimes called "abzymes".
Significance
Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. In 2005, catalytic processes generated about $900 billion in products worldwide. Catalysis is so pervasive that subareas are not readily classified. Some areas of particular concentration are surveyed below.
Energy processing
Petroleum refining makes intensive use of catalysis for alkylation, catalytic cracking (breaking long-chain hydrocarbons into smaller pieces), naphtha reforming and steam reforming (conversion of hydrocarbons into synthesis gas). Even the exhaust from the burning of fossil fuels is treated via catalysis: Catalytic converters, typically composed of platinum and rhodium, break down some of the more harmful byproducts of automobile exhaust.
2 CO + 2 NO → 2 CO₂ + N₂
With regard to synthetic fuels, an old but still important process is the Fischer–Tropsch synthesis of hydrocarbons from synthesis gas, which itself is processed via water-gas shift reactions, catalyzed by iron. The Sabatier reaction produces methane from carbon dioxide and hydrogen. Biodiesel and related biofuels require processing via both inorganic and biocatalysts.
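For orientation, the reactions named in this paragraph can be written out explicitly; the Fischer–Tropsch equation below is the generic alkane-forming stoichiometry, one of several product families the process yields:

```latex
\begin{align*}
\text{Water--gas shift:}\quad & \mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} \\
\text{Fischer--Tropsch:}\quad & (2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \\
\text{Sabatier:}\quad & \mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
\end{align*}
```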
Fuel cells rely on catalysts for both the anodic and cathodic reactions.
Catalytic heaters generate flameless heat from a supply of combustible fuel.
Bulk chemicals
Some of the largest-scale chemicals are produced via catalytic oxidation, often using oxygen. Examples include nitric acid (from ammonia), sulfuric acid (from sulfur dioxide to sulfur trioxide by the contact process), terephthalic acid from p-xylene, acrylic acid from propylene or propane and acrylonitrile from propane and ammonia.
The production of ammonia is one of the largest-scale and most energy-intensive processes. In the Haber process nitrogen is combined with hydrogen over an iron oxide catalyst. Methanol is prepared from carbon monoxide or carbon dioxide using copper-zinc catalysts.
Bulk polymers derived from ethylene and propylene are often prepared using Ziegler–Natta catalyst. Polyesters, polyamides, and isocyanates are derived via acid–base catalysis.
Most carbonylation processes require metal catalysts, examples include the Monsanto acetic acid process and hydroformylation.
Fine chemicals
Many fine chemicals are prepared via catalysis; methods include those of heavy industry as well as more specialized processes that would be prohibitively expensive on a large scale. Examples include the Heck reaction and Friedel–Crafts reactions. Because most bioactive compounds are chiral, many pharmaceuticals are produced by enantioselective catalysis (catalytic asymmetric synthesis). (R)-1,2-Propanediol, the precursor to the antibacterial levofloxacin, can be synthesized efficiently from hydroxyacetone using catalysts based on BINAP-ruthenium complexes, in Noyori asymmetric hydrogenation.
Food processing
One of the most obvious applications of catalysis is the hydrogenation (reaction with hydrogen gas) of fats using a nickel catalyst to produce margarine. Many other foodstuffs are prepared via biocatalysis (see below).
Environment
Catalysis affects the environment by increasing the efficiency of industrial processes, but catalysis also plays a direct role in the environment. A notable example is the catalytic role of chlorine free radicals in the breakdown of ozone. These radicals are formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs).
Cl + O₃ → ClO + O₂
ClO + O → Cl + O₂
Because the chlorine radical is regenerated in the second step, a single chlorine atom can catalyze the destruction of many ozone molecules.
History
The term "catalyst", broadly defined as anything that increases the rate of a process, is derived from Greek καταλύειν, meaning "to annul", or "to untie", or "to pick up". The concept of catalysis was invented by chemist Elizabeth Fulhame and described in a 1794 book, based on her novel work in oxidation–reduction reactions. The first chemical reaction in organic chemistry that knowingly used a catalyst was studied in 1811 by Gottlieb Kirchhoff, who discovered the acid-catalyzed conversion of starch to glucose. The term catalysis was later used by Jöns Jakob Berzelius in 1835 to describe reactions that are accelerated by substances that remain unchanged after the reaction. Fulhame, who predated Berzelius, did work with water as opposed to metals in her reduction experiments. Other 18th century chemists who worked in catalysis were Eilhard Mitscherlich who referred to it as contact processes, and Johann Wolfgang Döbereiner who spoke of contact action. He developed Döbereiner's lamp, a lighter based on hydrogen and a platinum sponge, which became a commercial success in the 1820s that lives on today. Humphry Davy discovered the use of platinum in catalysis. In the 1880s, Wilhelm Ostwald at Leipzig University started a systematic investigation into reactions that were catalyzed by the presence of acids and bases, and found that chemical reactions occur at finite rates and that these rates can be used to determine the strengths of acids and bases. For this work, Ostwald was awarded the 1909 Nobel Prize in Chemistry. Vladimir Ipatieff performed some of the earliest industrial scale reactions, including the discovery and commercialization of oligomerization and the development of catalysts for hydrogenation.
Inhibitors, poisons, and promoters
An added substance that lowers the rate is called a reaction inhibitor if reversible and a catalyst poison if irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves.
Inhibitors are sometimes referred to as "negative catalysts" since they decrease the reaction rate. However the term inhibitor is preferred since they do not work by introducing a reaction path with higher activation energy; this would not lower the rate since the reaction would continue to occur by the non-catalyzed path. Instead, they act either by deactivating catalysts or by removing reaction intermediates such as free radicals. In heterogeneous catalysis, coking inhibits the catalyst, which becomes covered by polymeric side products.
The inhibitor may modify selectivity in addition to rate. For instance, in the hydrogenation of alkynes to alkenes, a palladium (Pd) catalyst partly "poisoned" with lead(II) acetate (Pb(CH₃COO)₂) can be used (Lindlar catalyst). Without the deactivation of the catalyst, the alkene produced would be further hydrogenated to the alkane.
The inhibitor can produce this effect by, e.g., selectively poisoning only certain types of active sites. Another mechanism is the modification of surface geometry. For instance, in hydrogenation operations, large planes of metal surface function as sites of hydrogenolysis catalysis while sites catalyzing hydrogenation of unsaturates are smaller. Thus, a poison that covers the surface randomly will tend to lower the number of uncontaminated large planes but leave proportionally smaller sites free, thus changing the hydrogenation vs. hydrogenolysis selectivity. Many other mechanisms are also possible.
Promoters can cover up the surface to prevent the production of a mat of coke, or even actively remove such material (e.g., rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents.
See also
References
External links
Science Aid: Catalysts Page for high school level science
W.A. Herrmann Technische Universität presentation
Alumite Catalyst, Kameyama-Sakurai Laboratory, Japan
Inorganic Chemistry and Catalysis Group, Utrecht University, The Netherlands
Centre for Surface Chemistry and Catalysis
CarboCat Laboratory, University of Concepcion, Chile
NSF CENTC, Center for Enabling New Technologies (through catalysis)
"Bubbles turn on chemical catalysts", Science News, April 6, 2009.
Catalysis
Chemical kinetics
Articles containing video clips | Catalysis | [
"Chemistry"
] | 5,147 | [
"Catalysis",
"Chemical kinetics",
"Chemical reaction engineering"
] |
5,916 | https://en.wikipedia.org/wiki/Circumference | In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure.
Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk.
The circumference of a sphere is the circumference, or length, of any one of its great circles.
Circle
The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms.
Relationship with π
The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter π. Its first few decimal digits are 3.141592653589793... Pi is defined as the ratio of a circle's circumference C to its diameter d: π = C/d.
Or, equivalently, as the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference: C = πd = 2πr.
The ratio of the circle's circumference to its radius is equivalent to 2π, which is also the number of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science.
In Measurement of a Circle, written circa 250 BCE, Archimedes showed that this ratio (written as C/d, since he did not use the name π) was greater than 3 + 10/71 but less than 3 + 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method of approximating π was used for centuries, obtaining more accuracy by using polygons with larger and larger numbers of sides. The last such calculation was performed in 1630 by Christoph Grienberger, who used polygons with 10^40 sides.
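Archimedes' polygon method translates directly into a short program. The sketch below uses the classical side-doubling recurrence on the semiperimeters of the circumscribed (a) and inscribed (b) regular polygons of a unit circle, starting from hexagons; it is illustrative and deliberately avoids using math.pi:

```python
import math

# For a unit circle: a = semiperimeter of the circumscribed regular n-gon,
# b = semiperimeter of the inscribed one.  Doubling the number of sides:
#   a' = 2ab/(a+b)  (harmonic mean),   b' = sqrt(a'*b)  (geometric mean).
a, b, n = 2 * math.sqrt(3), 3.0, 6   # start from regular hexagons
while n < 96:                        # Archimedes stopped at the 96-gon
    a = 2 * a * b / (a + b)
    b = math.sqrt(a * b)
    n *= 2
print(f"{b:.6f} < pi < {a:.6f}")     # about 3.141032 < pi < 3.142715,
                                     # i.e. Archimedes' 3+10/71 < pi < 3+1/7
```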
Ellipse
Some authors use circumference to denote the perimeter of an ellipse. There is no general formula for the circumference of an ellipse in terms of the semi-major and semi-minor axes of the ellipse that uses only elementary functions. However, there are approximate formulas in terms of these parameters. One such approximation, due to Euler (1773), for the canonical ellipse x²/a² + y²/b² = 1,
is C ≈ π√(2(a² + b²)).
Some lower and upper bounds on the circumference of the canonical ellipse with a ≥ b are 4√(a² + b²) ≤ C ≤ 2πa.
Here the upper bound is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and minor axes.
The circumference of an ellipse can be expressed exactly in terms of the complete elliptic integral of the second kind. More precisely,
C = 4a E(e) = 4a ∫₀^(π/2) √(1 − e² sin²θ) dθ,
where a is the length of the semi-major axis and e is the eccentricity √(1 − b²/a²).
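The exact elliptic-integral expression and Euler's approximation can be compared numerically. The sketch below evaluates E(e) with a simple midpoint rule (the axis lengths a = 5, b = 3 are arbitrary illustrative values):

```python
import math

def ellipse_circumference(a: float, b: float, n: int = 100_000) -> float:
    """C = 4a * E(e), with E(e) evaluated by a midpoint rule on [0, pi/2]."""
    e2 = 1 - (b / a) ** 2                 # eccentricity squared
    h = (math.pi / 2) / n
    s = sum(math.sqrt(1 - e2 * math.sin((k + 0.5) * h) ** 2) for k in range(n))
    return 4 * a * s * h

a, b = 5.0, 3.0
exact = ellipse_circumference(a, b)            # ~25.527
euler = math.pi * math.sqrt(2 * (a * a + b * b))  # ~25.907
print(f"exact: {exact:.3f}   Euler (1773): {euler:.3f}")
# Euler's formula is a slight overestimate, consistent with its role as
# the upper bound quoted above.
```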
See also
Notes
References
External links
Numericana - Circumference of an ellipse
Geometric measurement
Circles | Circumference | [
"Physics",
"Mathematics"
] | 722 | [
"Geometric measurement",
"Physical quantities",
"Quantity",
"Geometry",
"Circles",
"Pi"
] |
5,918 | https://en.wikipedia.org/wiki/Continuum%20mechanics | Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles.
Continuum mechanics deals with deformable bodies, as opposed to rigid bodies.
A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships.
Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics.
Concept of a continuum
The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus.
Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties.
Major areas
An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties.
Formulation of models
Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body B being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time t is labeled κ_t(B).
A particular particle within the body in a particular configuration is characterized by a position vector x = Σᵢ xᵢeᵢ,
where eᵢ are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position X in some reference configuration, for example the configuration at the initial time, so that x = κ_t(X).
This function needs to have various properties so that the model makes physical sense. κ_t(·) needs to be:
continuous in time, so that the body changes in a way which is realistic,
globally invertible at all times, so that the body cannot intersect itself,
orientation-preserving, as transformations which produce mirror reflections are not possible in nature.
For the mathematical formulation of the model, κ_t(·) is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated.
Forces in a continuum
A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces.
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces F_C and body forces F_B. Thus, the total force F applied to a body or to a portion of the body can be expressed as:
F = F_B + F_C
Surface forces
Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion of conservation of linear momentum and angular momentum (for continuous bodies these laws are called the Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup.
The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field T(n, x, t) that represents this distribution in a particular configuration of the body at a given time t. It is not a vector field because it depends not only on the position x of a particular material point, but also on the local orientation of the surface element as defined by its normal vector n.
Any differential area dS with normal vector n of a given internal surface area S, bounding a portion of the body, experiences a contact force dF_C arising from the contact between both portions of the body on each side of S, and it is given by
dF_C = T^(n) dS
where T^(n) is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle).
The total contact force on the particular internal surface S is then expressed as the sum (surface integral) of the contact forces on all differential surfaces dS:
F_C = ∫_S T^(n) dS
In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress.
Body forces
Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) is manifested through the contact forces alone. These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density b (per unit of mass), which is a frame-indifferent vector field.
In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density ρ of the material, and it is specified in terms of force per unit mass (b) or per unit volume (p). These two specifications are related through the material density by the equation ρb = p. Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field.
The total body force applied to a continuous body is expressed as
F_B = ∫_V b dm = ∫_V ρb dV
Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque about the origin is given by
In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals.
Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by
Kinematics: motion and deformation
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 2).
The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line.
There is continuity during motion or deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at t = 0 is considered the reference configuration, κ₀(B). The components Xᵢ of the position vector X of a particle, taken with respect to the reference configuration, are called the material or reference coordinates.
When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description.
Lagrangian description
In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at t = 0. An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration. This description is normally used in solid mechanics.
In the Lagrangian description, the motion of a continuum body is expressed by the mapping function (Figure 2)
x = χ(X, t),
which is a mapping of the initial configuration κ₀(B) onto the current configuration κ_t(B), giving a geometrical correspondence between them, i.e. giving the position vector x that a particle, with a position vector X in the undeformed or reference configuration κ₀(B), will occupy in the current or deformed configuration κ_t(B) at time t. The components xᵢ are called the spatial coordinates.
Physical and kinematic properties P, i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. P = P(X, t).
The material derivative of any property of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought as the rate at which the property changes when measured by an observer traveling with that group of particles.
In the Lagrangian description, the material derivative of P(X, t) is simply the partial derivative with respect to time, and the position vector X is held constant as it does not change with time. Thus, we have
d/dt [P(X, t)] = ∂/∂t [P(X, t)]
The instantaneous position x is a property of a particle, and its material derivative is the instantaneous flow velocity v of the particle. Therefore, the flow velocity field of the continuum is given by
v(X, t) = dx/dt = ∂χ(X, t)/∂t
Similarly, the acceleration field is given by
a(X, t) = dv/dt = ∂²χ(X, t)/∂t²
Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the functions χ(·) and P(·) are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third.
Eulerian description
Continuity allows for the inverse of χ(·) to trace backwards where the particle currently located at x was located in the initial or referenced configuration κ₀(B). In this case the description of motion is made in terms of the spatial coordinates, in which case it is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration.
The Eulerian description, introduced by d'Alembert, focuses on the current configuration , giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time.
Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function
X = χ⁻¹(x, t),
which provides a tracing of the particle which now occupies the position x in the current configuration κ_t(B) to its original position X in the initial configuration κ₀(B).
A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus,
J = det(∂xᵢ/∂Xⱼ) ≠ 0
In the Eulerian description, the physical properties are expressed as
P = P(X, t) = P*(x, t),
where the functional form of P in the Lagrangian description is not the same as the form of P* in the Eulerian description.
The material derivative of P*(x, t), using the chain rule, is then
d/dt [P*(x, t)] = ∂/∂t [P*(x, t)] + v · ∇P*(x, t)
The first term on the right-hand side of this equation gives the local rate of change of the property occurring at position . The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion).
Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position .
Displacement field
The vector joining the positions of a particle in the undeformed configuration and deformed configuration is called the displacement vector u(X, t), in the Lagrangian description, or U(x, t), in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field, In general, the displacement field is expressed in terms of the material coordinates as
or in terms of the spatial coordinates as
where are the direction cosines between the material and spatial coordinate systems with unit vectors and , respectively. Thus
and the relationship between and is then given by
Knowing that
then
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0 (the frames share an origin), and the direction cosines become Kronecker deltas, i.e. α_Ji = δ_Ji.
Thus, we have
u(X, t) = x(X, t) − X
or in terms of the spatial coordinates as
U(x, t) = x − X(x, t)
Governing equations
Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied.
The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes:
the physical quantity itself flows through the surface that bounds the volume,
there is a source of the physical quantity on the surface of the volume, or/and,
there is a source of the physical quantity inside the volume.
Let Ω be the body (an open subset of Euclidean space) and let ∂Ω be its surface (the boundary of Ω).
Let the motion of material points in the body be described by the map
x = χ(X, t),
where X is the position of a point in the initial configuration and x is the location of the same point in the deformed configuration.
The deformation gradient F is given by
F = ∂x/∂X = ∇x
Balance laws
Let f(x, t) be a physical quantity that is flowing through the body. Let g(x, t) be sources on the surface of the body and let h(x, t) be sources inside the body. Let n(x, t) be the outward unit normal to the surface ∂Ω. Let v(x, t) be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface ∂Ω is moving be u_n (in the direction n).
Then, balance laws can be expressed in the general form
d/dt [∫_Ω f dV] = ∫_∂Ω f (u_n − v·n) dA + ∫_∂Ω g dA + ∫_Ω h dV
The functions f, g, and h can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws.
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations)
dρ/dt + ρ (∇·v) = 0 (balance of mass)
ρ dv/dt − ∇·σ − ρb = 0 (balance of linear momentum)
σ = σᵀ (balance of angular momentum)
ρ de/dt − σ : (∇v) + ∇·q − ρs = 0 (balance of energy)
In the above equations ρ(x, t) is the mass density (current), dρ/dt is the material time derivative of ρ, v(x, t) is the particle velocity, dv/dt is the material time derivative of v, σ(x, t) is the Cauchy stress tensor, b(x, t) is the body force density, e(x, t) is the internal energy per unit mass, de/dt is the material time derivative of e, q(x, t) is the heat flux vector, and s(x, t) is an energy source per unit mass. The operators used are defined below.
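As a consistency check, substituting f = ρ with no surface or volume sources (g = h = 0) into the general balance form above recovers the continuity equation; this is a sketch of the standard argument, not text from the original article:

```latex
% Mass balance with f = \rho and g = h = 0:
\[
\frac{d}{dt}\left[\int_{\Omega} \rho \, dV\right]
  = \int_{\partial\Omega} \rho\,(u_n - \mathbf{v}\cdot\mathbf{n}) \, dA
\]
% For a material volume the boundary moves with the particles, u_n = v . n,
% so the right-hand side vanishes; Reynolds' transport theorem then gives
\[
\int_{\Omega} \left[ \frac{\partial \rho}{\partial t}
  + \boldsymbol{\nabla}\cdot(\rho\,\mathbf{v}) \right] dV = 0
\quad\Longrightarrow\quad
\frac{\partial \rho}{\partial t} + \boldsymbol{\nabla}\cdot(\rho\,\mathbf{v}) = 0,
\]
% which is the Eulerian mass-balance equation listed above.
```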
With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as
ρ det(F) − ρ₀ = 0 (balance of mass)
ρ₀ d²x/dt² − ∇₀·Pᵀ − ρ₀b = 0 (balance of linear momentum)
F·Pᵀ = P·Fᵀ (balance of angular momentum)
ρ₀ de/dt − Pᵀ : dF/dt + ∇₀·q − ρ₀s = 0 (balance of energy)
Here ∇₀ denotes the gradient with respect to the reference coordinates.
In the above, P is the first Piola-Kirchhoff stress tensor, and ρ₀ is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor σ by
P = J σ F⁻ᵀ where J = det(F)
We can alternatively define the nominal stress tensor N, which is the transpose of the first Piola-Kirchhoff stress tensor, such that
N = Pᵀ = J F⁻¹ σ
Then the balance laws become
ρ det(F) − ρ₀ = 0
ρ₀ d²x/dt² − ∇₀·N − ρ₀b = 0
F·N = Nᵀ·Fᵀ
ρ₀ de/dt − N : dF/dt + ∇₀·q − ρ₀s = 0
Operators
The operators in the above equations are defined as
where v is a vector field, S is a second-order tensor field, and eᵢ are the components of an orthonormal basis in the current configuration. Also,
where v is a vector field, S is a second-order tensor field, and Eᵢ are the components of an orthonormal basis in the reference configuration.
The inner product is defined as
A : B = Σᵢⱼ Aᵢⱼ Bᵢⱼ = tr(ABᵀ)
Clausius–Duhem inequality
The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved.
Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density and an internal specific entropy (i.e. entropy per unit mass) in the region of interest.
Let Ω be such a region and let ∂Ω be its boundary. Then the second law of thermodynamics states that the rate of increase of entropy in this region is greater than or equal to the sum of that supplied to Ω (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region.
Let ∂Ω move with a flow velocity u_n and let particles inside Ω have velocities v. Let n be the unit outward normal to the surface ∂Ω. Let ρ be the density of matter in the region, q̄ be the entropy flux at the surface, and r be the entropy source per unit mass.
Then the entropy inequality may be written as
d/dt [∫_Ω ρη dV] ≥ ∫_∂Ω ρη (u_n − v·n) dA + ∫_∂Ω q̄ dA + ∫_Ω ρr dV
where η is the internal specific entropy introduced above.
The scalar entropy flux can be related to the vector flux ψ at the surface by the relation q̄ = −ψ·n. Under the assumption of incrementally isothermal conditions, we have
ψ(x, t) = q(x, t)/T and r = s/T
where q is the heat flux vector, s is an energy source per unit mass, and T is the absolute temperature of a material point at x at time t.
We then have the Clausius–Duhem inequality in integral form:
d/dt [∫_Ω ρη dV] ≥ ∫_∂Ω ρη (u_n − v·n) dA − ∫_∂Ω (q·n)/T dA + ∫_Ω (ρs)/T dV
We can show that the entropy inequality may be written in differential form as
ρ dη/dt ≥ −∇·(q/T) + ρs/T
In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as
ρ (de/dt − T dη/dt) − σ : ∇v ≤ −(q·∇T)/T
Validity
The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure.
When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogenous.
Applications
Continuum mechanics
Solid mechanics
Fluid mechanics
Engineering
Civil engineering
Mechanical engineering
Aerospace engineering
Biomedical engineering
Chemical engineering
See also
Transport phenomena
Bernoulli's principle
Cauchy elastic material
Configurational mechanics
Curvilinear coordinates
Equation of state
Finite deformation tensors
Finite strain theory
Hyperelastic material
Lagrangian and Eulerian specification of the flow field
Movable cellular automaton
Peridynamics (a non-local continuum theory leading to integral equations)
Stress (physics)
Stress measures
Tensor calculus
Tensor derivative (continuum mechanics)
Theory of elasticity
Knudsen number
Explanatory notes
References
Citations
Works cited
General references
External links
"Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity" by Gilles Leborgne, April 7, 2021: "Part IV Velocity-addition formula and Objectivity"
Classical mechanics | Continuum mechanics | [
"Physics"
] | 4,907 | [
"Mechanics",
"Classical mechanics",
"Continuum mechanics"
] |
5,926 | https://en.wikipedia.org/wiki/Computation | A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms.
Mechanical or electronic devices (or, historically, people) that perform computations are known as computers.
Computer science is an academic field that involves the study of computation.
Introduction
The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability.
Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation.
Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements.
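The idea that a computation is fixed by the "initialisation parameters of a Turing machine" — a transition table, a tape, and a starting state — can be made concrete with a small simulator. The sketch below is illustrative; the function name, the encoding, and the example machine (a unary incrementer) are all assumptions for demonstration, not a standard API:

```python
def run_turing_machine(transitions, tape, state="q0", accept="halt", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay).  The tape is stored
    as a dict from position to symbol; missing cells read as the blank "_".
    A missing rule raises KeyError, which is acceptable for a sketch.
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A unary incrementer: scan right past the 1s, write one more 1, then halt.
rules = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("halt", "1", 0),
}
print(run_turing_machine(rules, "111"))  # -> "1111"
```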
Some examples of mathematical statements that are computable include:
All statements characterised in modern programming languages, including C++, Python, and Java.
All calculations carried by an electronic computer, calculator or abacus.
All calculations carried out on an analytical engine.
All calculations carried out on a Turing Machine.
The majority of mathematical statements and calculations given in maths textbooks.
Some examples of mathematical statements that are not computable include:
Calculations or statements which are ill-defined, such that they cannot be unambiguously encoded into a Turing machine: ("Paul loves me twice as much as Joe").
Problem statements which do appear to be well-defined, but for which it can be proved that no Turing machine exists to solve them (such as the halting problem).
The Physical process of computation
Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, On Computable Numbers, with an Application to the Entscheidungsproblem, demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others.
Alternative accounts of computation
The mapping account
An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states."
The semantic account
Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything.
The mechanistic account
Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system.
Mathematical models
In the theory of computation, a diversity of mathematical models of computation has been developed.
Typical mathematical models of computers are the following:
State models including Turing machine, pushdown automaton, finite-state automaton, and PRAM
Functional models including lambda calculus
Logical models including logic programming
Concurrent models including actor model and process calculi
Giunti calls the models studied by computation theory computational systems, and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts: first, a mathematical dynamical system with discrete time and discrete state space; second, a computational setup, which is made up of a theoretical part and a real part; third, an interpretation, which links the dynamical system with the setup.
See also
Computability theory
Hypercomputation
Computational problem
Limits of computation
Computationalism
Notes
References
Theoretical computer science
Computability theory | Computation | [
"Mathematics"
] | 1,171 | [
"Computability theory",
"Applied mathematics",
"Mathematical logic",
"Theoretical computer science"
] |
5,931 | https://en.wikipedia.org/wiki/Cycling | Cycling, also known as bicycling or biking, is the activity of riding a bicycle or other type of cycle. It encompasses the use of human-powered vehicles such as balance bikes, unicycles, tricycles, and quadricycles. Cycling is practised around the world for purposes including transport, recreation, exercise, and competitive sport.
History
Cycling became popularized in Europe and North America in the latter part and especially the last decade of the 19th century. Today, over 50 percent of the human population knows how to ride a bike.
War
The bicycle has been used as a method of reconnaissance as well as transporting soldiers and supplies to combat zones. In this it has taken over many of the functions of horses in warfare. In the Second Boer War, both sides used bicycles for scouting. In World War I, France, Germany, Australia and New Zealand used bicycles to move troops. In its 1937 invasion of China, Japan employed some 50,000 bicycle troops, and similar forces were instrumental in Japan's march or "roll" through Malaya in World War II. Germany used bicycles again in World War II, while the British employed airborne "Cycle-commandos" with folding bikes.
In the Vietnam War, communist forces used bicycles extensively as cargo carriers along the Ho Chi Minh Trail.
The last country known to maintain a regiment of bicycle troops was Switzerland, which disbanded its last unit in 2003.
Equipment
In many countries, the most commonly used vehicle for road transport is a utility bicycle. These have frames with relaxed geometry, protecting the rider from shocks of the road and easing steering at low speeds. Utility bicycles tend to be equipped with accessories such as mudguards, pannier racks and lights, which extend their usefulness on a daily basis. Since the bicycle is so effective as a means of transportation, various companies have developed methods of carrying anything from the weekly shop to children on bicycles. Certain countries rely heavily on bicycles and their culture has developed around the bicycle as a primary form of transport. In Europe, Denmark and the Netherlands have the most bicycles per capita and most often use bicycles for everyday transport.
Road bikes tend to have a more upright shape and a shorter wheelbase, which make the bike more mobile but harder to ride slowly. The design, coupled with low or dropped handlebars, requires the rider to bend forward more, making use of stronger muscles (particularly the gluteus maximus) and reducing air resistance at high speed.
Road bikes are designed for speed and efficiency on paved roads. They are characterized by their lightweight frames, skinny tires, drop handlebars, and narrow saddles. Road bikes are ideal for racing, long-distance riding, and fitness training.
Other common types of bikes include gravel bikes, designed for use on gravel roads or trails, but with the ability to ride well on pavement; mountain bikes, which are designed for more rugged, undulating terrain; and e-bikes, which provide some level of motorized assist for the rider. There are additional variations of bikes and types of biking as well.
The price of a new bicycle can range from US$50 to more than US$20,000 (the highest priced bike in the world is the custom Madone by Damien Hirst, sold at US$500,000), depending on quality, type and weight (the most exotic road bicycles can weigh as little as 3.2 kg (7 lb)). However, UCI regulations stipulate a legal race bike cannot weigh less than 6.8 kg (14.99 lbs). Being measured for a bike and taking it for a test ride are recommended before buying.
The drivetrain components of the bike should also be considered. A middle-grade dérailleur is sufficient for a beginner, although many utility bikes are equipped with hub gears. If the rider plans a significant amount of hillclimbing, a triple-chainring crankset may be preferred. Otherwise, the relatively lighter, simpler, and less expensive double chainring is preferred, even on high-end race bikes. Much simpler fixed-wheel bikes are also available.
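The effect of chainring and sprocket choice can be compared numerically with the standard gear-development formula (distance travelled per crank revolution). The tooth counts and wheel diameter below are illustrative assumptions, not values from the text:

```python
from math import pi

def gear_development_m(chainring_teeth: int, cog_teeth: int,
                       wheel_diameter_m: float = 0.7) -> float:
    """Distance travelled (metres) per crank revolution.

    development = (chainring / cog) * wheel circumference
    """
    return (chainring_teeth / cog_teeth) * pi * wheel_diameter_m

# A compact double (50/34) vs the small ring of a triple (30),
# each paired with a 28-tooth sprocket for climbing.
for ring in (50, 34, 30):
    print(ring, round(gear_development_m(ring, 28), 2), "m per pedal stroke")
```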
Many road bikes, along with mountain bikes, include clipless pedals to which special shoes attach, via a cleat, enabling the rider to pull on the pedals as well as push. Other possible accessories for the bicycle include front and rear lights, bells or horns, child carrying seats, cycling computers with GPS, locks, bar tape, fenders (mud-guards), baggage racks, baggage carriers and pannier bags, water bottles and bottle cages.
For basic maintenance and repairs cyclists can carry a pump (or a CO2 cartridge), a puncture repair kit, a spare inner tube, and tire levers and a set of allen keys. Cycling can be more efficient and comfortable with special shoes, gloves, and shorts. In wet weather, riding can be more tolerable with waterproof clothes, such as cape, jacket, trousers (pants) and overshoes and high-visibility clothing is advisable to reduce the risk from motor vehicle users.
Items legally required in some jurisdictions, or voluntarily adopted for safety reasons, include bicycle helmets, generator or battery operated lights, reflectors, and audible signalling devices such as a bell or horn. Extras include studded tires and a bicycle computer.
Bikes can also be heavily customized, with different seat designs and handle bars, for example. Gears can also be customized to better suit the rider's strength in relation to the terrain.
Skills
Many schools and police departments run educational programs to instruct children in bicycle handling skills, especially to introduce them to the rules of the road as they apply to cyclists. In some countries these may be known as bicycle rodeos, or operated as schemes such as Bikeability in the UK. Education for adult cyclists is available from organizations such as the League of American Bicyclists.
Beyond simply riding, another skill is riding efficiently and safely in traffic. One popular approach to riding in motor vehicle traffic is vehicular cycling, occupying road space as a car does. Alternately, in countries such as Denmark and the Netherlands, where cycling is popular, cyclists are often segregated into bike lanes at the side of, or more often separate from, main highways and roads. Many primary schools participate in the national road test in which children individually complete a circuit on roads near the school while being observed by testers.
Infrastructure
Cyclists, pedestrians and motorists make different demands on road design which may lead to conflicts. Some jurisdictions give priority to motorized traffic, for example setting up one-way street systems, free-right turns, high capacity roundabouts, and slip roads. Others share priority with cyclists so as to encourage more cycling by applying varying combinations of traffic calming measures to limit the impact of motorized transport, and by building bike lanes, bike paths and cycle tracks. The provision of cycling infrastructure varies widely between cities and countries, particularly since cycling for transportation almost entirely occurs in public streets. The development of computer vision and street view imagery has also provided significant potential for assessing infrastructure for cyclists.
In jurisdictions where motor vehicles were given priority, cycling has tended to decline while in jurisdictions where cycling infrastructure was built, cycling rates have remained steady or increased. Occasionally, extreme measures against cycling may occur. In Shanghai, where bicycles were once the dominant mode of transport, bicycle travel on a few city roads was banned temporarily in December 2003.
In areas in which cycling is popular and encouraged, cycle-parking facilities using bicycle stands, lockable mini-garages, and patrolled cycle parks are used to reduce theft. Local governments promote cycling by permitting bicycles to be carried on public transport or by providing external attachment devices on public transport vehicles. Conversely, an absence of secure cycle-parking is a recurring complaint by cyclists from cities with low modal share of cycling.
Extensive cycling infrastructure may be found in some cities. Such dedicated paths in some cities often have to be shared with in-line skaters, scooters, skateboarders, and pedestrians. Dedicated cycling infrastructure is treated differently in the law of every jurisdiction, including the question of liability of users in a collision. There is also some debate about the safety of the various types of separated facilities.
Bicycles are considered a sustainable mode of transport, especially suited for urban use and relatively shorter distances when used for transport (compared to recreation). Case studies and good practices (from European cities and some worldwide examples) that promote and stimulate this kind of functional cycling in cities can be found at Eltis, Europe's portal for local transport.
A number of cities, including Paris, London and Barcelona, now have successful bike hire schemes designed to help people cycle in the city. Typically these feature utilitarian city bikes which lock into docking stations, released on payment for set time periods. Costs vary from city to city. In London, initial hire access costs £2 per day. The first 30 minutes of each trip is free, with £2 for each additional 30 minutes until the bicycle is returned.
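The quoted London tariff amounts to a simple step function of trip length. The sketch below encodes exactly the rates stated above (scheme details change over time, so treat the numbers as illustrative):

```python
import math

def london_hire_cost_gbp(trip_minutes: int, access_fee: float = 2.0,
                         free_minutes: int = 30, rate_per_half_hour: float = 2.0) -> float:
    """Cost of one day's access plus a single trip, using the quoted tariff."""
    extra = max(0, trip_minutes - free_minutes)       # chargeable minutes
    return access_fee + math.ceil(extra / 30) * rate_per_half_hour

print(london_hire_cost_gbp(25))   # 2.0  -- within the free half hour
print(london_hire_cost_gbp(75))   # 6.0  -- 45 extra minutes -> two periods
```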
In the Netherlands, many roads have one or two separate cycleways alongside them, or cycle lanes marked on the road. On roads where adjacent bike paths or cycle tracks exist, the use of these facilities is compulsory, and cycling on the main carriageway is not permitted. Some 35,000 km of cycle-track has been physically segregated from motor traffic, equal to a quarter of the country's entire 140,000 km road network. A quarter of all trips in the country are made on bicycles, one quarter of them to work. Even the prime minister goes to work by bicycle, when weather permits. This saves the lives of 6,000 citizens per year, prolongs life expectancy by 6 months, saves the country 20 million dollars per year, and prevents 150 grams of CO₂ from being emitted per kilometer of cycling.
Types
Utility
Utility cycling refers both to cycling as a mode of daily commuting transport as well as the use of a bicycle in a commercial activity, mainly to transport goods, mostly accomplished in an urban environment.
The postal services of many countries have long relied on bicycles. The British Royal Mail first started using bicycles in 1880; now bicycle delivery fleets include 37,000 in the UK, 25,700 in Germany, 10,500 in Hungary and 7000 in Sweden. In Australia, Australia Post has also reintroduced bicycle postal deliveries on some routes due to an inability to recruit sufficient licensed riders willing to use their uncomfortable motorbikes. The London Ambulance Service has recently introduced bicycling paramedics, who can often get to the scene of an incident in Central London more quickly than a motorized ambulance.
The use of bicycles by police has been increasing, since they provide greater accessibility to bicycle and pedestrian zones and allow access when roads are congested. In some cases, bicycle officers have been used as a supplement or a replacement for horseback officers.
Bicycles enjoy substantial use as general delivery vehicles in many countries. In the UK and North America, generations of teenagers have had their first jobs delivering newspapers by bicycle. London has many delivery companies that use bicycles with trailers. Most cities in the West, and many outside it, support a sizeable and visible industry of cycle couriers who deliver documents and small packages. In India, many of Mumbai's Dabbawalas use bicycles to deliver home-cooked lunches to the city's workers. In Bogotá, Colombia, the city's largest bakery recently replaced most of its delivery trucks with bicycles. Even the car industry uses bicycles: at the huge Mercedes-Benz factory in Sindelfingen, Germany, workers use bicycles, color-coded by department, to move around the factory.
Recreational
Bicycle touring
Bicycles are used for recreation at all ages. Bicycle touring, also known as cyclotourism, involves touring, exploration or sightseeing by bicycle for leisure, and is among the most popular forms of recreational cycling. A brevet or randonnée is an organized long-distance ride.
Relaxed cycling through the countryside is a popular Dutch pastime. The land is very flat and full of public bicycle trails and cycle tracks where cyclists are not bothered by cars and other traffic, which makes it ideal for cycling recreation. Many Dutch people take part every year in an event called a fietsvierdaagse, four days of organised cycling through the local environment. Paris–Brest–Paris (PBP), which began in 1891, is the oldest bicycling event still run on a regular basis on the open road; it covers over 1,200 km and imposes a 90-hour time limit. Similar if smaller institutions exist in many countries.
A study conducted in Taiwan examined how improvements in the cycling environment affected the recreational and health benefits of bicycle tourism, for tourists and residents alike. The number of bicyclists in Taiwan increased from 700,000 in 2008 to 5.1 million in 2017, which prompted the establishment of more, and safer, bicycle routes. When deciding whether to tour by bicycle, cyclists take into account safety on the road, bicycle lanes, smooth roads, diverse scenery, and ride length, so the environment plays a large role in the decision. Using questionnaires and statistical analysis, the study concluded that the top five factors cyclists weigh before deciding to ride are safety, lighting facilities, the design of lanes, the surrounding landscape, and the cleanliness of the environment, and that improving these five factors substantially increased the recreational benefits of bicycle tourism.
Organized rides
Many cycling clubs hold organized rides in which bicyclists of all levels participate. The typical organized ride starts with a large group of riders, called the mass, bunch or even peloton. This will thin out over the course of the ride. Many riders choose to ride together in groups of the same skill level to take advantage of drafting.
Most organized rides, for example cyclosportives (or gran fondos), Challenge Rides or reliability trials, and hill climbs, include registration requirements and provide information either through the mail or online concerning start times and other requirements. Rides usually consist of several different routes, sorted by mileage, with a certain number of rest stops that usually include refreshments, first aid and maintenance tools. Route lengths can vary substantially.
Some organized rides are entirely social events. One example is the monthly San Jose Bike Party, which can reach an attendance of one to two thousand riders in summer months.
Mountain
Mountain biking began in the 1970s, originally as a downhill sport practised on customized cruiser bicycles around Mount Tamalpais. Most mountain biking takes place on dirt roads, trails and in purpose-built parks. Downhill mountain biking has evolved rapidly in recent years and is performed at places such as the Whistler Mountain Bike Park. In slopestyle, a form of downhill, riders perform tricks such as tailwhips, 360s, backflips and front flips.
There are several disciplines of mountain biking besides downhill, including: cross country (often referred to as XC), all mountain, trail, free ride, and newly popular enduro.
In 2020, due to COVID-19, mountain bikes saw a surge in popularity in the US, with some vendors reporting that they were sold out of bikes under US$1000.
Other
The Marching and Cycling Band HHK from Haarlem (the Netherlands) is one of the few marching bands around the world which also performs on bicycles.
Racing
Shortly after the introduction of bicycles, competitions developed independently in many parts of the world. Early races involving boneshaker-style bicycles were predictably fraught with injuries. Large races became popular during the 1890s "Golden Age of Cycling", with events across Europe, and in the U.S. and Japan as well. At one point, almost every major city in the US had a velodrome or two for track racing events; however, since the middle of the 20th century cycling has become a minority sport in the US, whilst in Europe it continues to be a major sport, particularly in the United Kingdom, France, Belgium, Italy and Spain. The most famous of all bicycle races is the Tour de France, which began in 1903 and continues to capture the attention of the sporting world.
In 1899, Charles Minthorn Murphy became the first man to ride his bicycle a mile in under a minute (hence his nickname, Mile-a-Minute Murphy), which he did by drafting a locomotive on Long Island, New York.
As the bicycle evolved its various forms, different racing formats developed. Road races may involve both team and individual competition, and are contested in various ways. They range from the one-day road race, criterium, and time trial to multi-stage events like the Tour de France and its sister events which make up cycling's Grand Tours. Recumbent bicycles were banned from bike races in 1934 after Marcel Berthet set a new hour record in his Velodyne streamliner (49.992 km on 18 November 1933). Track bicycles are used for track cycling in velodromes, while cyclo-cross races are held on outdoor terrain, including pavement, grass, and mud. Cyclo-cross races feature human-made features such as small barriers which riders either bunny-hop over or dismount and walk over. Time trial races, another form of road racing, require a rider to ride against the clock. Time trials can be performed as a team or as a single rider. Bicycles for time trial races are adapted with aero bars. In the past decade, mountain bike racing has also reached international popularity and is even an Olympic sport.
Professional racing organizations place limitations on the bicycles that can be used in the races that they sanction. For example, the Union Cycliste Internationale, the governing body of international cycle sport (which sanctions races such as the Tour de France), decided in the late 1990s to create additional rules which prohibit racing bicycles weighing less than 6.8 kilograms (14.96 pounds). The UCI rules also effectively ban some bicycle frame innovations (such as the recumbent bicycle) by requiring a double triangle structure.
Activism
Many broad and correlated themes run through bicycle activism: one is about advocating the bicycle as an alternative mode of transport, and another is about the creation of conditions to permit and/or encourage bicycle use, both for utility and recreational cycling. While the first, which emphasizes the potential for energy and resource conservation and the health benefits of cycling versus automobile use, is relatively undisputed, the second is the subject of much debate.
It is generally agreed that improved local and inter-city rail services and other methods of mass transportation (including greater provision for cycle carriage on such services) create conditions to encourage bicycle use. However, there are different opinions on the role of various types of cycling infrastructure in building bicycle-friendly cities and roads.
Some bicycle activists (including some traffic management advisers) seek the construction of bike paths, cycle tracks and bike lanes for journeys of all lengths and point to their success in promoting safety and encouraging more people to cycle. Some activists, especially those from the vehicular cycling tradition, view the safety, practicality, and intent of such facilities with suspicion. They favor a more holistic approach based on the 4 'E's: education (of everyone involved), encouragement (to apply the education), enforcement (to protect the rights of others), and engineering (to facilitate travel while respecting every person's equal right to do so). Some groups offer training courses to help cyclists integrate themselves with other traffic.
Critical Mass is an event typically held on the last Friday of every month in cities around the world where bicyclists take to the streets en masse. While the ride was founded with the idea of drawing attention to how unfriendly the city was to bicyclists, the leaderless structure of Critical Mass makes it impossible to assign it any one specific goal. In fact, the purpose of Critical Mass is not formalized beyond the direct action of meeting at a set location and time and traveling as a group through city streets.
There is a long-running cycle helmet debate among activists. The most heated controversy surrounds the topic of compulsory helmet use.
It is paradoxical that in many developing countries cycling is in decline as bicycles are replaced by motorbikes and cars, while in many developed countries cycling is on the rise.
Equality
Within western societies the demographic of those who cycle is often not representative of broader society. Research by Transport for London (TfL) suggests that cyclists in London are typically 'white, under 40, male, with medium to high household income.' Studies of large-scale representative data from Germany show that people with higher levels of education cycle substantially more often than those with lower levels of education. Even for trips of the same distance, and among people from the same city with the same income level, those with higher education cycle more. As a result, there are various forms of activism focused on diversifying the cycling community. Organizations such as Street Riders NYC, inspired by the Black Lives Matter movement, protest against systemic racism and police brutality from their bicycles. An incidental experience for Street Riders NYC protest participants is the inequity in where safe bicycling infrastructure exists by neighbourhood, which is interpreted as a form of classism within cycling and urbanism. The bicycle has also acted as a means for women's liberation and thus has links to feminism.
Associations
Cyclists form associations, both for specific interests (trails development, road maintenance, bike maintenance, urban design, racing clubs, touring clubs, etc.) and for more global goals (energy conservation, pollution reduction, promotion of fitness). Some bicycle clubs and national associations became prominent advocates for improvements to roads and highways. In the United States, the League of American Wheelmen lobbied for the improvement of roads in the last part of the 19th century, founding and leading the national Good Roads Movement. Their model for political organization, as well as the paved roads for which they argued, facilitated the growth of the automobile.
In Europe, the European Cyclists' Federation represents around 70 local, regional and national civil society organisations across more than 40 countries that work to promote cycling as a mode of transport and leisure.
As a sport, cycling is governed internationally by the Union Cycliste Internationale in Switzerland (for upright bicycles) and by the International Human Powered Vehicle Association (for other HPVs, or human-powered vehicles); in the United States it is governed by USA Cycling, which merged with the United States Cycling Federation in 1995. Cycling for transport and touring is promoted on a European level by the European Cyclists' Federation, with associated members from Great Britain, Japan and elsewhere. Regular conferences on cycling as transport are held under the auspices of Velo City; global conferences are coordinated by Velo Mondial.
Cycling as a means of transportation
Cycling is widely regarded as an effective and efficient mode of transportation optimal for short to moderate distances.
Bicycles provide numerous possible benefits in comparison with motor vehicles, including the sustained physical exercise involved in cycling, easier parking, increased maneuverability, and access to roads, bike paths and rural trails. Cycling also offers a reduced consumption of fossil fuels, less air and noise pollution, reduced greenhouse gas emissions, and greatly reduced traffic congestion. These have a lower financial cost for users as well as for society at large (negligible damage to roads, less road area required). By fitting bicycle racks on the front of buses, transit agencies can significantly increase the areas they can serve.
Among the disadvantages of cycling are the requirement that riders of bicycles (excepting tricycles or quadricycles) have a certain level of basic skill to remain upright, the reduced protection in crashes in comparison to motor vehicles, often longer travel time (except in densely populated areas), vulnerability to weather conditions, difficulty in transporting passengers, and the fact that a basic level of fitness is required for cycling moderate to long distances.
Health effects
Cycling provides a variety of health benefits and reduces the risk of the cancers, heart disease, and diabetes that are prevalent in sedentary lifestyles. Cycling on stationary bikes has also been used as part of rehabilitation for lower limb injuries, particularly after hip surgery. Individuals who cycle regularly have also reported mental health improvements, including less perceived stress and better vitality.
The health benefits of cycling outweigh the risks when cycling is compared to a sedentary lifestyle. A Dutch study found that cycling can extend lifespans by up to 14 months, while the associated risks equated to a reduced lifespan of 40 days or less. The reduction in mortality rate was found to be directly correlated with average time spent cycling, amounting to approximately 6,500 deaths prevented by cycling. Cycling in the Netherlands is often safer than in other parts of the world, so the risk-benefit ratio will be different in other regions. Overall, the benefits of cycling or walking have been shown to exceed the risks by ratios of 9:1 to 96:1 when compared with no exercise at all, across a wide variety of physical and mental outcomes.
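As a quick sanity check on the Dutch figures above, converting both quantities to days (the 30.44-day average month length is an assumption) gives a benefit-to-risk ratio comfortably inside the 9:1 to 96:1 range quoted for active travel in general:

```python
months_gained = 14            # upper estimate of lifespan extension from cycling
avg_days_per_month = 30.44    # assumption: average Gregorian month length
days_lost = 40                # upper estimate of lifespan reduction from risks

benefit_days = months_gained * avg_days_per_month
print(f"benefit ~ {benefit_days:.0f} days; "
      f"benefit:risk ~ {benefit_days / days_lost:.1f}:1")  # ~426 days; ~10.7:1
```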
Exercise
The physical exercise gained from cycling is generally linked with increased health and well-being. According to the World Health Organization (WHO), physical inactivity is second only to tobacco smoking as a health risk in developed countries, and is associated with 20-30% increased risk of various cancers, heart disease, and diabetes and tens of billions of dollars of healthcare costs. The WHO's 2009 report suggests that increasing physical activity is a public health "best buy", and that cycling is a "highly suitable activity" for this purpose. The charity Sustrans reports that investment in cycling provision can give a 20:1 return from health and other benefits. It has been estimated that, on average, approximately 20 life-years are gained from the health benefits of road bicycling for every life-year lost through injury.
Bicycles are often used by people seeking to improve their fitness and cardiovascular health. Recent studies on the use of cycling for commutes have shown that it reduces the risk of cardiovascular outcomes by 11%, with slightly more risk reduction in women than in men. In addition, cycling is especially helpful for those with arthritis of the lower limbs who are unable to pursue sports that cause impact to the knees and other joints. Since cycling can be used for the practical purpose of transportation, there can be less need for self-discipline to exercise.
Cycling while seated is a relatively non-weight-bearing exercise that, like swimming, does little to promote bone density. Cycling up and out of the saddle, on the other hand, does a better job by transferring more of the rider's body weight to the legs. However, excessive cycling while standing can cause knee damage. It used to be thought that cycling while standing was less energy efficient, but recent research has shown this not to be true. Other than air resistance, there is no wasted energy from cycling while standing, if it is done correctly.
Cycling on a stationary cycle is frequently advocated as a suitable exercise for rehabilitation, particularly for lower limb injury, owing to the low impact it has on the joints. In particular, cycling is commonly used within knee rehabilitation programs to strengthen the quadriceps muscles with minimal stress on the knee ligaments. Further stress on the knee can be relieved by changing seat height and pedal position during rehabilitation. Cycling is also used for rehabilitation after hip surgery to manage soft-tissue healing, control swelling and pain, and allow a larger range of motion in the nearby muscles earlier during recovery. As a result, many institutions have established rehabilitation protocols that involve stationary cycling as part of the recovery process. One such protocol, offered by Mayo Clinic, recommends 2–4 weeks of cycling on an upright stationary bike following hip arthroscopy, starting from 5 minutes per session and slowly increasing to 30 minutes per session. The goal of these sessions is to reduce joint inflammation and maintain the widest range of motion possible with limited pain.
In response to increasingly sedentary global lifestyles and the consequent rise in overweight and obesity, one strategy adopted by many organizations concerned with health and the environment is the promotion of active travel, which seeks to promote walking and cycling as safe and attractive alternatives to motorized transport. Given that many journeys are for relatively short distances, there is considerable scope to replace car use with walking or cycling, though in many settings this may require some infrastructure modification, particularly to attract the less experienced and confident.
An Italian study assessed the impact of cycling for commute on major non-communicable diseases and public healthcare costs. Using a health economic assessment model, the study found a lower incidence of type 2 diabetes, acute myocardial infarction, and stroke in individuals that cycled compared to those that did not actively commute. This model estimated that public healthcare costs would reduce by 5% over a 10-year period.
Illinois designated cycling as its official state exercise in 2007.
Mental health
The effects of cycling on overall mental health have often been studied. A European study surveying participants from seven cities about self-perceived health based on primary modes of transportation reported favorable results in the bicycle-use population. The bicycle-use group reported predominantly good self-perceived health, less perceived stress, better mental health, better vitality, and less loneliness. The study attributed these results to possible economic benefits and senses of both independence and identity as a member of a cyclist community. An English study recruited non-cyclist older adults aged 50 to 83 to participate as either conventional pedal-bike cyclists, electrically assisted e-bike cyclists, or a non-cyclist control group on outdoor trails, measuring cognitive function through executive function, spatial reasoning, and memory tests, and well-being through questionnaires. The study did not find significant differences in the spatial reasoning or memory tests. It did, however, find that both cycling groups had improved executive function and well-being, with greater improvement in the e-bike group. This suggested that non-physical factors of cycling, such as independence, engagement with the outdoor environment, and mobility, play a greater role in improving mental health.
A 15-month randomized controlled trial in the U.S. examined the impact of self-paced cycling on cognitive function in institutionalized older adults without cognitive impairment. Researchers used three cognitive assessments: Mini-Mental State Examination (MMSE), Fuld object memory evaluation, and symbol digit modality test. The study found that long-term cycling for at least 15 minutes per day in older adults without cognitive impairment had a protective effect on cognition and attention.
Cycling has also been shown to be an effective adjunct therapy in certain mental health conditions.
Bicycle safety
Cycling suffers from a perception that it is unsafe. This perception is not always backed by hard numbers, because of under-reporting of crashes and a lack of bicycle-use data (amount of cycling, kilometers cycled), which makes it hard to assess the risk and monitor changes in risks.
In the UK, fatality rates per mile or kilometre are slightly less than those for walking. In the US, bicycling fatality rates are less than 2/3 of those for walking the same distance. However, in the UK, for example, the fatality and serious injury rates per hour of travel for cycling are just over double those for walking.
Despite the risk factors associated with bicycling, cyclists have a lower overall mortality rate when compared to other groups. A Danish study in 2000 found that even after adjustment for other risk factors, including leisure time physical activity, those who did not cycle to work experienced a 39% higher mortality rate than those who did.
Injuries (to cyclists, from cycling) can be divided into two types:
Physical trauma (extrinsic)
Overuse (intrinsic)
Physical trauma
Acute physical trauma includes injuries to the head and extremities resulting from falls and collisions. Most cycle deaths result from a collision with a car or heavy goods vehicle. Drivers are at fault in the majority of these crashes. Segregated cycling infrastructure reduces the rate of crashes between bicycles and motor vehicles.
Although a majority of bicycle collisions occur during the day, bicycle lighting is recommended for safety when bicycling at night to increase visibility.
Overuse injuries
In a study of 518 cyclists, a large majority reported at least one overuse injury, with over one third requiring medical treatment. The most common injury sites were the neck (48.8%) and the knees (41.7%), as well as the groin/buttocks (36.1%), hands (31.1%), and back (30.3%). Women were more likely to suffer from neck and shoulder pain than men.
Many cyclists suffer from overuse injuries to the knees, affecting cyclists at all levels. These are caused by many factors:
Incorrect bicycle fit or adjustment, particularly the saddle.
Incorrect adjustment of clipless pedals.
Too many hills, or too many miles, too early in the training season.
Poor training preparation for long touring rides.
Selecting too high a gear. A lower gear for uphill climbs protects the knees, even though muscles may be well able to handle a higher gear.
Overuse injuries, including chronic nerve damage at weight bearing locations, can occur as a result of repeatedly riding a bicycle for extended periods of time. Damage to the ulnar nerve in the palm, carpal tunnel in the wrist, the genitourinary tract or bicycle seat neuropathy may result from overuse. Recumbent bicycles are designed on different ergonomic principles and eliminate pressure from the saddle and handlebars, due to the relaxed riding position.
Note that overuse is a relative term, and capacity varies greatly between individuals. Someone starting out in cycling must be careful to increase length and frequency of cycling sessions slowly, starting for example at an hour or two per day, or a hundred miles or kilometers per week. Bilateral muscular pain is a normal by-product of the training process, whereas unilateral pain may reveal "exercise-induced arterial endofibrosis". Joint pain and numbness are also early signs of overuse injury.
A Spanish study of top triathletes found those who cover more than 186 miles (300 km) a week on their bikes have less than 4% normal looking sperm, where normal adult males would be expected to have from 15% to 20%.
Saddle related
Much work has been done to investigate optimal bicycle saddle shape, size and position, and negative effects of extended use of less than optimal seats or configurations.
Excessive saddle height can cause posterior knee pain, while setting the saddle too low can cause pain in the anterior of the knee. An incorrectly fitted saddle may eventually lead to muscle imbalance. A 25 to 35-degree knee angle is recommended to avoid an overuse injury.
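As an illustration of the guideline just quoted, here is a hypothetical helper that checks a measured knee angle against the recommended 25 to 35-degree range; the assumption that the angle is knee flexion at the bottom of the pedal stroke (so a too-high saddle straightens the leg and a too-low saddle bends it further) is mine:

```python
def check_knee_flexion(angle_degrees, lo=25, hi=35):
    """Compare knee flexion at the bottom of the pedal stroke with the
    25-35 degree range recommended to avoid overuse injury."""
    if angle_degrees < lo:
        return "below range: saddle may be too high (risk of posterior knee pain)"
    if angle_degrees > hi:
        return "above range: saddle may be too low (risk of anterior knee pain)"
    return "within the recommended range"

print(check_knee_flexion(22))  # below range: saddle may be too high ...
print(check_knee_flexion(30))  # within the recommended range
```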
Although cycling is beneficial to health, men who cycle more than three hours a week can be negatively affected by the significant weight placed on the perineum, the area between the scrotum and the anus that carries some of the nerves and arteries supplying the penis. Sustained pressure over many hours a week can cause numbness or tingling and, through reduced blood flow, a loss of the ability to achieve an erection; 13% of males experienced this in a study by Norwegian researchers who gathered data from 160 men participating in a long-distance bike tour. Fitting a properly sized seat can prevent this effect. In extreme cases, pudendal nerve entrapment can be a source of intractable perineal pain. Some cyclists with induced pudendal nerve pressure neuropathy gained relief from improvements in saddle position and riding techniques.
The National Institute for Occupational Safety and Health (NIOSH) has investigated the potential health effects of prolonged bicycling in police bicycle patrol units, including the possibility that some bicycle saddles exert excessive pressure on the urogenital area of cyclists, restricting blood flow to the genitals. Their study found that using bicycle seats without protruding noses reduced pressure on the groin by at least 65% and significantly reduced the number of cases of urogenital paresthesia. A follow-up found that 90% of bicycle officers who tried the no-nose seat were using it six months later. NIOSH recommends that riders use a no-nose bicycle seat for workplace bicycling.
Despite rumors to the contrary, there is no scientific evidence linking cycling with testicular cancer.
Exposure to air pollution
One concern is that riding in traffic may expose the cyclist to higher levels of air pollution, especially when travelling regularly on or along busy roads. Some authors have claimed this to be untrue, showing that the pollutant and irritant count within cars is consistently higher, presumably because of limited circulation of air within the car and the air intake being directly in the stream of other traffic. Other authors have found small or inconsistent differences in concentrations but claim that exposure of cyclists is higher due to increased minute ventilation and is associated with minor biological changes. A 2010 study estimated that the life expectancy gained from the health benefits of cycling (approximately 3–14 months) greatly exceeded the life expectancy lost from air pollution (approximately 0.8–40 days). A systematic review comparing the effects of air pollution exposure on the health of cyclists concluded that the differing methodologies and measuring parameters of the available studies made results difficult to compare, and suggested that a more holistic approach was needed. The significance of the associated health effect, if any, is unclear but probably much smaller than the health impacts associated with accidents and the health benefits derived from additional physical activity.
See also
Bicycle culture
Cyclability
Cycle sport
Cycling advocacy
Cycling in the Netherlands
Cycling mobility
Fancy Women Bike Ride
History of cycling
List of bicycle-sharing systems
List of films about bicycles and cycling
Masters cycling
Outline of bicycles
References
Aerobic exercise
Articles containing video clips
Emissions reduction
Sustainable transport
Symbols of Illinois | Cycling | [
"Physics",
"Chemistry"
] | 7,674 | [
"Emissions reduction",
"Physical systems",
"Transport",
"Sustainable transport",
"Greenhouse gases"
] |
5,932 | https://en.wikipedia.org/wiki/Carbohydrate | A carbohydrate is a biomolecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms, usually with a hydrogen–oxygen atom ratio of 2:1 (as in water) and thus with the empirical formula Cm(H2O)n (where m may or may not be different from n), which does not mean the H has covalent bonds with O (for example, in CH2O, H has a covalent bond with C but not with O). However, not all carbohydrates conform to this precise stoichiometric definition (e.g., uronic acids, deoxy-sugars such as fucose), nor are all chemicals that do conform to this definition automatically classified as carbohydrates (e.g., formaldehyde and acetic acid).
The term is most common in biochemistry, where it is a synonym of saccharide, a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose, and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)).
Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.
Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes.
Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.
Terminology
In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings.
In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.
The term "carbohydrate" (or "carbohydrate by difference") refers also to dietary fiber, which is a carbohydrate, but, unlike sugars and starches, fibers are not hydrolyzed by human digestive enzymes. Fiber generally contributes little food energy in humans, but is often included in the calculation of total food energy. The fermentation of soluble fibers by gut microflora can yield short-chain fatty acids, and soluble fiber is estimated to provide about 2 kcal/g.
History
The history of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by the German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by the French physiologist Claude Bernard.
Structure
Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm (H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemistry sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates which deviate from this formula. For example, while the above representative formulas would seem to capture the commonly known carbohydrates, ubiquitous and abundant carbohydrates often deviate from this. For example, carbohydrates often display chemical groups such as: N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid).
Natural saccharides are generally built of simple carbohydrates called monosaccharides with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose) and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6).
The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.
Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.
Division
Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.
Monosaccharides
Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (C·H2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n=3, are dihydroxyacetone and D- and L-glyceraldehyde.
Classification of monosaccharides
The α and β anomers of glucose, distinguished by the position of the hydroxyl group on the anomeric carbon relative to the CH2OH group bound to carbon 5: they either have identical absolute configurations (R,R or S,S) (α), or opposite absolute configurations (R,S or S,R) (β).
Monosaccharides are classified according to three different characteristics: the placement of its carbonyl group, the number of carbon atoms it contains, and its chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four are called tetroses, five are called pentoses, six are hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).
Each carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. By the Le Bel–van 't Hoff rule, the aldohexose D-glucose, for example, with the formula (C·H2O)6, has four stereogenic centers among its six carbon atoms, making D-glucose one of 2^4 = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction that the sugar rotates plane-polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry.
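The Le Bel–van 't Hoff counting above is straightforward to reproduce; a minimal sketch (the function name is illustrative):

```python
def max_stereoisomers(stereocenters):
    """Le Bel-van 't Hoff rule: an open-chain monosaccharide with n
    stereogenic carbons has at most 2**n stereoisomers."""
    return 2 ** stereocenters

print(max_stereoisomers(4))  # 16 for an aldohexose such as glucose (C2-C5)
print(max_stereoisomers(3))  # 8 for an aldopentose such as ribose
```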
Ring-straight chain isomerism
The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
During the conversion from straight-chain form to the cyclic form, the carbon atom containing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: The oxygen atom may take a position either above or below the plane of the ring. The resulting possible pair of stereoisomers is called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.
Use in living organisms
Monosaccharides are the major fuel source for metabolism, and glucose is an energy-rich molecule utilized to generate ATP in almost all living organisms. Glucose is a high-energy substrate produced in plants through photosynthesis by combining energy-poor water and carbon dioxide in an endothermic reaction fueled by solar energy. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In animals, glucose circulating in the blood is a major metabolic substrate and is oxidized in the mitochondria to produce ATP for performing useful cellular work. In humans and other animals, serum glucose levels must be regulated carefully to maintain glucose within acceptable limits and prevent the deleterious effects of hypo- or hyperglycemia. Hormones such as insulin and glucagon serve to keep glucose levels in balance: insulin stimulates glucose uptake into muscle and fat cells when glucose levels are high, whereas glucagon helps to raise glucose levels if they dip too low by stimulating hepatic glucose synthesis. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants; in humans it is metabolized in the liver, absorbed directly into the intestines during digestion, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.
Disaccharides
Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.
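The C12H22O11 formula follows directly from the atom bookkeeping of the dehydration reaction; taking sucrose (described below) as the example:

```latex
\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}}
+ \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{fructose}}
\longrightarrow
\underbrace{\mathrm{C_{12}H_{22}O_{11}}}_{\text{sucrose}}
+ \mathrm{H_2O}
```

Carbon, hydrogen and oxygen all balance: 12 C, 24 H and 12 O atoms appear on each side once the water molecule lost in forming the glycosidic linkage is counted.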
Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:
Its monosaccharides: glucose and fructose
Their ring types: glucose is a pyranose and fructose is a furanose
How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.
The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.
Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types, reducing and non-reducing: a disaccharide is reducing if it retains a free anomeric (hemiacetal) group, whereas if both anomeric carbons participate in the glycosidic bond, as in sucrose, it is non-reducing.
Oligosaccharides and polysaccharides
Oligosaccharides
Oligosaccharides are saccharide polymers composed of three to ten monosaccharide units connected via glycosidic linkages, similar to disaccharides. They are usually linked to lipids or amino acids through glycosidic linkages to oxygen or nitrogen, forming glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, are not. They have roles in cell recognition and cell adhesion.
Polysaccharides
Nutrition
Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Refined carbohydrates from processed foods such as white bread or rice, soft drinks, and desserts are readily digestible, and many are known to have a high glycemic index, which reflects a rapid assimilation of glucose. By contrast, the digestion of whole, unprocessed, fiber-rich foods such as beans, peas, and whole grains produces a slower and steadier release of glucose and energy into the body. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.
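Using the per-gram values quoted above, here is a minimal sketch of the energy arithmetic; the 4.0 kcal/g figure chosen for complex carbohydrate is my simplification of the quoted 3.57–4.12 range:

```python
KCAL_PER_GRAM = {
    "simple_sugar": 3.87,  # value quoted above for simple sugars
    "complex": 4.0,        # assumed midpoint of the quoted 3.57-4.12 range
}

def carbohydrate_energy_kcal(grams, kind="complex"):
    """Approximate food energy supplied by a given mass of carbohydrate."""
    return grams * KCAL_PER_GRAM[kind]

print(carbohydrate_energy_kcal(50, "simple_sugar"))  # 193.5 kcal from 50 g sugar
print(carbohydrate_energy_kcal(75, "complex"))       # 300.0 kcal from 75 g starch
```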
Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present, the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides such as chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose, fermenting it to caloric short-chain fatty acids. Even though humans lack the enzymes to digest fiber, dietary fiber represents an important dietary element for humans. Fibers promote healthy digestion, help regulate postprandial glucose and insulin levels, reduce cholesterol levels, and promote satiety.
The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.
Classification
The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides). Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.
Glycemic index
The glycemic index (GI) and glycemic load concepts characterize the potential for carbohydrates in food to raise blood glucose compared to a reference food (generally pure glucose). Expressed numerically as GI, carbohydrate-containing foods can be grouped as high-GI (score more than 70), moderate-GI (56-69), or low-GI (less than 55) relative to pure glucose (GI=100). Consumption of carbohydrate-rich, high-GI foods causes an abrupt increase in blood glucose concentration that declines rapidly after the meal, whereas low-GI foods with lower carbohydrate content produce a lower blood glucose concentration that returns gradually to baseline after the meal.
Glycemic load is a measure relating the quality of carbohydrates in a food (low- vs. high-carbohydrate content – the GI) by the amount of carbohydrates in a single serving of that food.
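A short sketch combining the GI bands quoted above with the usual glycemic-load formula; note that the explicit formula GL = GI × grams of available carbohydrate per serving / 100 is a widely used convention rather than something stated in the text above, and the handling of the boundary scores 55 and 70 (which the quoted bands leave ambiguous) is my choice:

```python
def gi_band(gi):
    """Classify a glycemic index value using the bands quoted above
    (high > 70, moderate 56-69, low < 55; boundary handling assumed)."""
    if gi > 70:
        return "high"
    if gi >= 56:
        return "moderate"
    return "low"

def glycemic_load(gi, carbs_per_serving_g):
    """Glycemic load, conventionally GI x grams of available carbohydrate / 100."""
    return gi * carbs_per_serving_g / 100

print(gi_band(100))           # 'high' -- pure glucose, the reference food
print(glycemic_load(72, 30))  # 21.6 for a food with GI 72 and 30 g carbohydrate
```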
Health effects of dietary carbohydrate restriction
Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber and phytochemicals – afforded by high-quality plant foods such as legumes and pulses, whole grains, fruits, and vegetables. A meta-analysis of moderate quality listed halitosis, headache and constipation among the adverse effects of the diet.
Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, low-carbohydrate diets do not appear to confer a "metabolic advantage," and effective weight loss or maintenance depends on the level of calorie restriction, not the ratio of macronutrients in a diet. Diet advocates reason that carbohydrates cause undue fat accumulation by increasing blood insulin levels, but a more balanced diet that restricts refined carbohydrates can also reduce serum glucose and insulin levels and may also suppress lipogenesis and promote fat oxidation. However, as far as energy expenditure itself is concerned, the claim that low-carbohydrate diets have a "metabolic advantage" is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.
Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.
An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
Sources
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the hetero-polysaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.
The carbohydrate value is calculated in the USDA database and does not always correspond to the sum of the sugars, the starch, and the "dietary fiber".
Metabolism
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with the skeletal muscle contributing to a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of anaerobic and aerobic respiration metabolize glucose and oxygen (aerobic) to release energy, with carbon dioxide and water as byproducts.
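A quick sketch of the storage arithmetic implied by the figures above (16 kJ/g for carbohydrate, 38 kJ/g for lipids, 300–500 g of stored carbohydrate):

```python
KJ_PER_G_CARB = 16   # approximate oxidation yield quoted above
KJ_PER_G_FAT = 38    # approximate oxidation yield quoted above

low_g, high_g = 300, 500  # quoted range of whole-body carbohydrate storage
low_kj, high_kj = low_g * KJ_PER_G_CARB, high_g * KJ_PER_G_CARB
print(f"stored carbohydrate energy: {low_kj}-{high_kj} kJ")  # 4800-8000 kJ
print(f"same energy as fat: {low_kj / KJ_PER_G_FAT:.0f}-"
      f"{high_kj / KJ_PER_G_FAT:.0f} g")                     # roughly 126-211 g
```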
Catabolism
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.
In glycolysis, oligo- and polysaccharides are cleaved first to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter into monosaccharide catabolism. An investment of 2 ATP is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable, as the digestive and metabolic enzymes necessary are not present.
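For context, the textbook net reaction of glycolysis, which is not stated above: the 2 ATP invested in the early phosphorylation steps are recouped, with 2 more gained, in the later payoff phase.

```latex
\mathrm{C_6H_{12}O_6} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i}
\longrightarrow
2\ \text{pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
```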
Carbohydrate chemistry
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson-Cohen reaction
Ferrier rearrangement
Ferrier II reaction
Chemical synthesis
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive.
Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs-Knorr reaction
Crich beta-mannosylation
While some common protection methods are as below:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
See also
Bioplastic
Carbohydrate NMR
Gluconeogenesis – A process where glucose is synthesized from non-carbohydrate sources.
Glycobiology
Glycogen
Glycoinformatics
Glycolipid
Glycome
Glycomics
Glycosyl
Macromolecule
Saccharic acid
References
Further reading
External links
Carbohydrates, including interactive models and animations (Requires MDL Chime)
IUPAC-IUBMB Joint Commission on Biochemical Nomenclature (JCBN): Carbohydrate Nomenclature
Carbohydrates detailed
Carbohydrates and Glycosylation – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Functional Glycomics Gateway, a collaboration between the Consortium for Functional Glycomics and Nature Publishing Group
Nutrition | Carbohydrate | [
"Chemistry"
] | 6,693 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Carbohydrates",
"Carbohydrate chemistry"
] |
5,936 | https://en.wikipedia.org/wiki/Chemical%20thermodynamics | Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.
The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.
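For a closed system restricted to pressure-volume work at constant composition, the four fundamental equations referred to above are commonly written as follows (A is the Helmholtz energy; for open systems a composition term Σi μi dni is added to each):

```latex
\begin{aligned}
dU &= T\,dS - p\,dV \\
dH &= T\,dS + V\,dp \\
dA &= -S\,dT - p\,dV \\
dG &= -S\,dT + V\,dp
\end{aligned}
```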
History
In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics. Building on the work of Clausius, between 1873 and 1876 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be applied graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions and their tendencies to occur or proceed. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.
During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the term chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the Methods of Willard Gibbs, written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.
Overview
The primary objective of chemical thermodynamics is the establishment of a criterion for determination of the feasibility or spontaneity of a given transformation. In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
Chemical reactions
Phase changes
The formation of solutions
The following state functions are of primary concern in chemical thermodynamics:
Internal energy (U)
Enthalpy (H)
Entropy (S)
Gibbs free energy (G)
Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.
The three laws of thermodynamics (global, unspecific forms):
1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in entropy of the universe.
3. The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero.
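A compact symbolic restatement of the three laws (a standard paraphrase, added here for reference):

$$\Delta U_{\text{universe}} = 0, \qquad \Delta S_{\text{universe}} \ge 0 \ \text{for any spontaneous process}, \qquad S_{\text{perfect crystal}}(T = 0\,\mathrm{K}) = 0$$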
Chemical energy
Chemical energy is the energy that can be released when chemical substances undergo a transformation through a chemical reaction. Breaking and making chemical bonds involves energy release or uptake, often as heat that may be either absorbed by or evolved from the chemical system.
Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from ΔUf(reactants), the internal energy of formation of the reactant molecules related to the bond energies of the molecules under consideration, and ΔUf(products), the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume, as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.)
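In symbols, with qV and qP denoting the heat exchanged at constant volume and at constant pressure respectively (a standard restatement of the paragraph above, not a formula from the source):

$$\Delta U = \sum U_f(\text{products}) - \sum U_f(\text{reactants}), \qquad q_V = \Delta U, \qquad q_P = \Delta H = \Delta U + P\,\Delta V$$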
A related term is the heat of combustion, which is the chemical energy released due to a combustion reaction and of interest in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its energy release is similar (though assessed differently than for a hydrocarbon fuel — see food energy).
In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and sometimes the Gibbs–Duhem equation is used.
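For reference, the Gibbs–Duhem equation mentioned here relates simultaneous changes in the chemical potentials to changes in temperature and pressure (standard form, added for clarity):

$$\sum_i N_i \, d\mu_i = -S\,dT + V\,dP$$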
Chemical reactions
In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy in the universe unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" systems, the free-energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the amounts of each chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
Gibbs function or Gibbs energy
For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition (the amounts of each chemical substance, expressed as the numbers of molecules present or the numbers of moles). Explicitly,

G = G(T, P, { Ni }).

For the case where only PV work is possible,

dG = −S dT + V dP + Σi μi dNi,

a restatement of the fundamental thermodynamic relation, in which μi = (∂G/∂Ni)T,P,Nj≠i is the chemical potential for the i-th component in the system.
The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures; there it reduces to dG = Σi μi dNi.
Chemical affinity
While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( Ni ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind.
Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, p. 37.62), and use the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction,

dG = −S dT + V dP + (∂G/∂ξ)T,P dξ.
If we introduce the stoichiometric coefficient for the i-th component in the reaction,

νi = ∂Ni/∂ξ

(negative for reactants), which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative

(∂G/∂ξ)T,P = Σi νi μi = −A,
where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) The minus sign ensures that in a spontaneous change, when the change in the Gibbs free energy of the process is negative, the chemical species have a positive affinity for each other. The differential of G takes on a simple form that displays its dependence on composition change:

dG = −S dT + V dP − A dξ.
If there are a number of chemical reactions going on simultaneously, as is usually the case,

dG = −S dT + V dP − Σj Aj dξj,

with a set of reaction coordinates { ξj }, avoiding the notion that the amounts of the components ( Ni ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while they are negative when chemical reactions proceed at a finite rate, producing entropy. This can be made even more explicit by introducing the reaction rates dξj/dt. For every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24),

Aj dξj/dt ≥ 0.
This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See Constraints below.)
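A minimal numeric sketch in Python of the affinity criterion A = −Σi νi μi; the reaction chosen, the chemical-potential values, and the function name are illustrative assumptions, not data from the source:

# Affinity of a single reaction, A = -sum_i nu_i * mu_i,
# with stoichiometric coefficients nu_i negative for reactants.
def affinity(nu, mu):
    # nu: species -> stoichiometric coefficient
    # mu: species -> chemical potential in J/mol
    return -sum(coeff * mu[species] for species, coeff in nu.items())

# Illustrative example: N2 + 3 H2 -> 2 NH3 (values invented for the sketch)
nu = {"N2": -1, "H2": -3, "NH3": 2}
mu = {"N2": 0.0, "H2": 0.0, "NH3": -16400.0}  # J/mol
A = affinity(nu, mu)
# A > 0 means dG/dxi < 0, so the reaction can proceed forward spontaneously.
print(f"A = {A:.0f} J/mol")  # A = 32800 J/mol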
We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the equality for dG is now replaced by the inequality

dG ≤ −S dT + V dP,

or, at constant T and P,

dG ≤ 0.
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and its surroundings. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other also does. The coupling may occasionally be rigid, but it is often flexible and variable.
Solutions
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T, respectively.
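A minimal sketch in Python of the spontaneity criterion in this molar-units sense, ΔG = ΔH − TΔS < 0; the numeric values below are invented for illustration:

# Gibbs free energy change at constant T and P; negative means spontaneous.
def gibbs_change(delta_h, delta_s, temperature):
    # delta_h in J/mol, delta_s in J/(mol*K), temperature in K
    return delta_h - temperature * delta_s

dG = gibbs_change(delta_h=-92000.0, delta_s=-199.0, temperature=298.0)
print(f"dG = {dG:.0f} J/mol ->", "spontaneous" if dG < 0 else "non-spontaneous")
# dG = -32698 J/mol -> spontaneous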
Non-equilibrium
Generally the systems treated with the conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he has discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields.
The non-equilibrium thermodynamics has been applied for explaining how ordered structures e.g. the biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium in thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations and cannot explain the occurrence of ordered structures.
Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.
The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples.
System constraints
In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist.
A gas-phase reaction at constant temperature and pressure which results in an increase in the number of molecules will lead to an increase in volume. Inside a cylinder closed with a piston, it can proceed only by doing work on the piston. The extent variable for the reaction can increase only if the piston moves out, and conversely if the piston is pushed inward, the reaction is driven backwards.
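For an ideal-gas mixture the piston work can be made concrete (a standard result, added here for illustration): at constant temperature and pressure the volume change is fixed by the change in the amount of gas, so

$$\Delta V = \frac{\Delta n_{\text{gas}} R T}{P}, \qquad w = P\,\Delta V = \Delta n_{\text{gas}} R T,$$

where w is the work the reacting system does on the piston.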
Similarly, a redox reaction might occur in an electrochemical cell with the passage of current through a wire connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as Joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.
The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process.
See also
Thermodynamic databases for pure substances
Laws of thermodynamics
References
Further reading
External links
Chemical Thermodynamics - University of North Carolina
Chemical energetics (Introduction to thermodynamics and the First Law)
Thermodynamics of chemical equilibrium (Entropy, Second Law and free energy)
Physical chemistry
Branches of thermodynamics
Chemical engineering thermodynamics | Chemical thermodynamics | ["Physics", "Chemistry", "Engineering"] | 3,312 | ["Applied and interdisciplinary physics", "Chemical engineering", "Thermodynamics", "nan", "Chemical engineering thermodynamics", "Chemical thermodynamics", "Branches of thermodynamics", "Physical chemistry"] |
5,946 | https://en.wikipedia.org/wiki/Casuistry | Casuistry ( ) is a process of reasoning that seeks to resolve moral problems by extracting or extending abstract rules from a particular case, and reapplying those rules to new instances. This method occurs in applied ethics and jurisprudence. The term is also used pejoratively to criticise the use of clever but unsound reasoning, especially in relation to ethical questions (as in sophistry). It has been defined as follows:
Study of cases of conscience and a method of solving conflicts of obligations by applying general principles of ethics, religion, and moral theology to particular and concrete cases of human conduct. This frequently demands an extensive knowledge of natural law and equity, civil law, ecclesiastical precepts, and an exceptional skill in interpreting these various norms of conduct....
It remains a common method in applied ethics.
Etymology
According to the Online Etymological Dictionary, the term and its agent noun "casuist", appearing from about 1600, derive from the Latin noun casus, meaning "case", especially as referring to a "case of conscience". The same source says, "Even in the earliest printed uses the sense was pejorative".
History
Casuistry dates from Aristotle (384–322 BC), yet the peak of casuistry was from 1550 to 1650, when the Society of Jesus (commonly known as the Jesuits) used case-based reasoning, particularly in administering the Sacrament of Penance (or "confession"). The term became pejorative following Blaise Pascal's attack on the misuse of the method in his Provincial Letters (1656–57). The French mathematician, religious philosopher and Jansenist sympathiser attacked priests who used casuistic reasoning in confession to pacify wealthy church donors. Pascal charged that "remorseful" aristocrats could confess a sin one day, re-commit it the next, then generously donate to the church and return to re-confess their sin, confident that they were being assigned a penance in name only. These criticisms darkened casuistry's reputation in the following centuries. For example, the Oxford English Dictionary quotes a 1738 essay by Henry St. John, 1st Viscount Bolingbroke to the effect that casuistry "destroys, by distinctions and exceptions, all morality, and effaces the essential difference between right and wrong, good and evil".
The 20th century saw a revival of interest in casuistry. In their book The Abuse of Casuistry: A History of Moral Reasoning (1988), Albert Jonsen and Stephen Toulmin argue that it is not casuistry but its abuse that has been a problem; that, properly used, casuistry is powerful reasoning. Jonsen and Toulmin offer casuistry as a method for compromising the contradictory principles of moral absolutism and moral relativism. In addition, the ethical philosophies of utilitarianism (especially preference utilitarianism) and pragmatism have been identified as employing casuistic reasoning.
Early modernity
The casuistic method was popular among Catholic thinkers in the early modern period. Casuistic authors include Antonio Escobar y Mendoza, whose Summula casuum conscientiae (1627) enjoyed great success, Thomas Sanchez, Vincenzo Filliucci (Jesuit and penitentiary at St Peter's), Antonino Diana, Paul Laymann (Theologia Moralis, 1625), John Azor (Institutiones Morales, 1600), Etienne Bauny, Louis Cellot, Valerius Reginaldus, and Hermann Busembaum (d. 1668).
The progress of casuistry was interrupted toward the middle of the 17th century by the controversy which arose concerning the doctrine of probabilism, which effectively stated that one could choose to follow a "probable opinion", that is, an opinion supported by a theologian or another, even if it contradicted a more probable opinion or a quotation from one of the Fathers of the Church.
Certain kinds of casuistry were criticised by early Protestant theologians, because it was used to justify many of the abuses that they sought to reform. It was famously attacked by the Catholic and Jansenist philosopher Blaise Pascal during the formulary controversy against the Jesuits, in his Provincial Letters, as the use of rhetoric to justify moral laxity, which became identified by the public with Jesuitism; hence the everyday use of the term to mean complex and sophistic reasoning to justify moral laxity. By the mid-18th century, "casuistry" had become a synonym for attractive-sounding, but ultimately false, moral reasoning.
In 1679 Pope Innocent XI publicly condemned sixty-five of the more radical propositions (stricti mentalis), taken chiefly from the writings of Escobar, Suarez and other casuists as propositiones laxorum moralistarum and forbade anyone to teach them under penalty of excommunication. Despite this condemnation by a pope, both Catholicism and Protestantism permit the use of ambiguous statements in specific circumstances.
Later modernity
G. E. Moore dealt with casuistry in chapter 1.4 of his Principia Ethica, in which he claimed that "the defects of casuistry are not defects of principle; no objection can be taken to its aim and object. It has failed only because it is far too difficult a subject to be treated adequately in our present state of knowledge". Furthermore, he asserted that "casuistry is the goal of ethical investigation. It cannot be safely attempted at the beginning of our studies, but only at the end".
Since the 1960s, applied ethics has revived the ideas of casuistry in applying moral reasoning to particular cases in law, bioethics, and business ethics. Its facility for dealing with situations where rules or values conflict with each other has made it a useful approach in professional ethics, and casuistry's reputation has improved somewhat as a result.
Pope Francis, a Jesuit, has criticized casuistry as "the practice of setting general laws on the basis of exceptional cases" in instances where a more holistic approach would be preferred.
See also
References
Further reading
Bliton, Mark J. (1993). The Ethics of Clinical Ethics Consultation: On the Way to Clinical Philosophy (Diss. Vanderbilt)
Carney, Bridget Mary. (1993). Modern Casuistry: An Essential But Incomplete Method for Clinical Ethical Decision-Making. (Diss., Graduate Theological Union).
Carson, Ronald A. (1988). "Paul Ramsey, Principled Protestant Casuist: A Retrospective." Medical Humanities Review, Vol. 2, pp. 24–35.
Chidwick, Paula Marjorie (1994). Approaches to Clinical Ethical Decision-Making: Ethical Theory, Casuistry and Consultation. (Diss., U of Guelph)
Drane, J.F. (1990). "Methodologies for Clinical Ethics." Bulletin of the Pan American Health Organization, Vol. 24, pp. 394–404.
Dworkin, R.B. (1994). "Emerging Paradigms in Bioethics: Symposium." Indiana Law Journal, Vol. 69, pp. 945–1122.
Elliot, Carl (1992). "Solving the Doctor's Dilemma?" New Scientist, Vol. 133, pp. 42–43.
Emanuel, Ezekiel J. (1991). The Ends of Human Life: Medical Ethics in a Liberal Polity (Cambridge).
Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal (Johns Hopkins), ch. 4.
Gallagher, Lowell (1991). Medusa's Gaze: Casuistry and Conscience in the Renaissance (Stanford)
Green, Bryan S. (1988). Literary Methods and Sociological Theory: Case Studies of Simmel and Weber (Albany)
Houle, Martha Marie (1983). The Fictions of Casuistry and Pascal's Jesuit in "Les Provinciales" (Diss. U California, San Diego)
Jonsen, Albert R. (1986). "Casuistry" in J.F. Childress and J. Macgvarrie, eds. Westminster Dictionary of Christian Ethics (Philadelphia)
Jonsen, Albert R. and Stephen Toulmin (1988). The Abuse of Casuistry: A History of Moral Reasoning (California).
Keenan, James F., S.J. and Thomas A. Shannon. (1995). The Context of Casuistry (Washington).
Kirk, K. (1936). Conscience and Its Problems, An Introduction to Casuistry (London)
Kuczewski, Mark G. (1994). Fragmentation and Consensus in Contemporary Neo-Aristotelian Ethics: A Study in Communitarianism and Casuistry (Diss., Duquesne U).
Long, Edward LeRoy, junior (1954). Conscience and Compromise: an Approach to Protestant Casuistry (Philadelphia, Penn.: Westminster Press)
Mackler, Aaron Leonard. Cases of Judgments in Ethical Reasoning: An Appraisal of Contemporary Casuistry and Holistic Model for the Mutual Support of Norms and Case Judgments (Diss., Georgetown U).
McCready, Amy R. (1992). "Milton's Casuistry: The Case of 'The Doctrine and Discipline of Divorce.' " Journal of Medieval and Renaissance Studies, Vol. 22, pp. 393–428.
Odozor, Paulinus Ikechukwu (1989). Richard A. McCormick and Casuistry: Moral Decision-Making in Conflict Situations (M.A. Thesis, St. Michael's College).
Pack, Rolland W. (1988). Case Studies and Moral Conclusions: The Philosophical Use of Case Studies in Biomedical Ethics (Diss., Georgetown U).
Pascal, Blaise (1967). The Provincial Letters (London).
Río Parra, Elena del (2008). Cartografías de la conciencia española en la Edad de Oro (Mexico).
Seiden, Melvin (1990). Measure for Measure: Casuistry and Artistry (Washington).
Smith, David H. (1991). "Stories, Values, and Patient Care Decisions." in Charles Conrad, ed. The Ethical Nexus: Values in Organizational Decision Making. (New Jersey).
Starr, G. (1971). Defoe and Casuistry (Princeton).
Tallmon, James Michael (2001). "Casuistry" in The Encyclopedia of Rhetoric. Ed. Thomas O. Sloane. New York: Oxford University Press, pp. 83–88.
Tallmon, James Michael (1993). Casuistry and the Quest for Rhetorical Reason: Conceptualizing a Method of Shared Moral Inquiry (Diss., U of Washington).
Taylor, Richard (1984). Good and Evil – A New Direction: A Forceful Attack on the Rationalist Tradition in Ethics (Buffalo).
Toulmin, Stephen (1988). "The Recovery of Practical Philosophy." The American Scholar, Vol. 57, pp. 337–352.
Weinstein, Bruce David (1989). The Possibility of Ethical Expertise (Diss. Georgetown U).
Wildes, Kevin Wm., S.J. (1993). The View for Somewhere: Moral Judgment in Bioethics (Diss. Rice U).
Zacker, David J. (1991). Reflection and Particulars: Does Casuistry Offer Us Stable Beliefs About Ethics? (M.A. Thesis, Western Michigan U).
External links
Dictionary of the History of Ideas: "Casuistry"
Accountancy as computational casuistics, article on how modern compliance regimes in accountancy and law apply casuistry
Mortimer Adler's Great Ideas – Casuistry
Summary of casuistry by Jeramy Townsley
Casuistry – Online Guide to Ethics and Moral Philosophy
Casuistry – Oxford Encyclopedia of Rhetoric catalogued at she-philosopher.com
Scholasticism
Applied ethics
Common law
Legal reasoning
Jurisprudence
Criticism of religion | Casuistry | ["Biology"] | 2,489 | ["Behavior", "Human behavior", "Applied ethics"] |
5,948 | https://en.wikipedia.org/wiki/Chinese%20input%20method | Several input methods allow the use of Chinese characters with computers. Most allow selection of characters based either on their pronunciation or their graphical shape. Phonetic input methods are easier to learn but are less efficient, while graphical methods allow faster input, but have a steep learning curve.
Other methods allow users to write characters directly via touchscreens, such as those found on mobile phones and tablet computers.
History
Chinese input methods predate the computer. One of the early attempts was an electro-mechanical Chinese typewriter, the Ming kwai (明快), which was invented by Lin Yutang, a prominent Chinese writer, in the 1940s. It assigned thirty base shapes or strokes to different keys and adopted a new way of categorizing Chinese characters. But the typewriter was not produced commercially and Lin soon found himself deeply in debt.
Before the 1980s, Chinese publishers hired teams of workers and selected a few thousand type pieces from an enormous Chinese character set. Chinese government agencies entered characters using a long, complicated list of Chinese telegraph codes, which assigned different numbers to each character. During the early computer era, Chinese characters were categorized by their radicals or Pinyin romanization, but results were less than satisfactory.
In the 1970s to 1980s, large keyboards with thousands of keys were used to input Chinese. Each key was mapped to several Chinese characters. To type a character, one pressed the character key and then a selection key. There were also experimental "radical keyboards" with dozens to several hundred keys. Chinese characters were decomposed into "radicals", each of which was represented by a key. Unwieldy and difficult to use, these keyboards became obsolete after the introduction of the Cangjie input method, the first method to use only the standard keyboard and make Chinese touch typing possible.
Chu Bong-Foo invented a common input method in 1976 with his Cangjie input method, which assigns different "roots" to each key on a standard computer keyboard. With this method, for example, the character 日 is assigned to the A key, and 月 is assigned to B. Typing them together will result in the character 明 ("bright").
Despite its steeper learning curve, this method remains popular in Chinese communities that use traditional Chinese characters, such as Hong Kong and Taiwan; the method allows very precise input, thus allowing users to type more efficiently and quickly, provided they are familiar with the fairly complicated rules of the method. It was the first method that allowed users to enter more than a hundred Chinese characters per minute. Its popularity is also helped by its omnipresence on traditional Chinese computer systems, since Chu gave up the patent in 1982, stating that it should be part of the cultural asset. Developers of Chinese systems can adopt it freely, and users do not have the hassle of it being absent on devices with Chinese support. Cangjie input programs supporting a large CJK character set have been developed.
All methods have their strengths and weaknesses. The pinyin method can be learned rapidly but its maximum input rate is limited. The Wubi method takes longer to learn, but expert typists can enter text much more rapidly with it than with phonetic methods. However, Wubi is proprietary, and a version of it has become freely available only after its inventor lost a patent lawsuit in 1997.
Due to these complexities, there is no "standard" method.
In mainland China, pinyin methods such as Sogou Pinyin and Google Pinyin are the most popular. In Taiwan, the use of Cangjie, Dayi, Boshiamy, and bopomofo predominates; and in Hong Kong and Macau, Cangjie is most often taught in schools, while a few schools teach the CKC Chinese Input System.
Other methods include handwriting recognition, OCR and speech recognition. The computer itself must first be "trained" before the first or second of these methods are used; that is, the new user enters the system in a special "learning mode" so that the system can learn to identify their handwriting or speech patterns. The latter two methods are used less frequently than keyboard-based input methods and suffer from relatively high error rates, especially when used without proper "training", though higher error rates are an acceptable trade-off to many users.
Categories
Phonetic-based
The user enters pronunciations that are converted into relevant Chinese characters. The user must select the desired character from homophones, which are common in Chinese. Modern systems, such as Sogou Pinyin and Google Pinyin, predict the desired characters based on context and user preferences. For example, if one enters the sounds jicheng, the software will type 繼承 (to inherit), but if jichengche is entered, 計程車 (taxi) will appear.
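A minimal sketch of this longest-match candidate selection in Python; the two-entry dictionary and the greedy longest-prefix rule are illustrative assumptions, not how Sogou Pinyin or Google Pinyin actually work:

# Toy pinyin-to-character converter: prefer the longest dictionary match,
# mimicking how a longer input ("jichengche") overrides a shorter one ("jicheng").
DICT = {
    "jicheng": "繼承",       # to inherit
    "jichengche": "計程車",  # taxi
}

def convert(pinyin):
    # Greedy longest-match segmentation over the toy dictionary.
    out, i = [], 0
    while i < len(pinyin):
        match = None
        for j in range(len(pinyin), i, -1):
            if pinyin[i:j] in DICT:
                match = pinyin[i:j]
                break
        if match is None:
            out.append(pinyin[i])  # pass through unknown input
            i += 1
        else:
            out.append(DICT[match])
            i += len(match)
    return "".join(out)

print(convert("jicheng"))     # 繼承
print(convert("jichengche"))  # 計程車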
Various Chinese dialects complicate the system. Phonetic methods are mainly based on standard pinyin, Zhuyin/Bopomofo, and Jyutping in China, Taiwan, and Hong Kong, respectively. Input methods based on other varieties of Chinese, like Hakka or Minnan, also exist.
While the phonetic system is easy to learn, choosing appropriate Chinese characters slows typing speed. Most users report a typing speed of fifty characters per minute, though some reach over one hundred per minute. With some phonetic IMEs (Input Method Editors), in addition to predictive input based on previous conversions, it is possible for users to create custom dictionary entries for frequently used characters and phrases, potentially lowering the number of keystrokes required to evoke them.
Shuangpin
Shuangpin (; ), literally dual spell, is a stenographical phonetic input method based on hanyu pinyin that reduces the number of keystrokes for one Chinese character to two by distributing every vowel and consonant composed of more than one letter to a specific key. In most Shuangpin layout schemes such as Xiaohe, Microsoft 2003 and Ziranma, the most frequently used vowels are placed on the middle layer, reducing the risk of repetitive strain injury.
Shuangpin is supported by a large number of pinyin input software including QQ, Microsoft Bing Pinyin, Sogou Pinyin and Google Pinyin.
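A toy illustration in Python of the two-keystrokes-per-syllable idea; the key assignments below are invented for the example and do not correspond to any real Shuangpin scheme such as Xiaohe or Ziranma:

# Hypothetical Shuangpin-style mapping: every multi-letter initial and final
# collapses to a single key, so any syllable costs exactly two keystrokes.
INITIALS = {"zh": "v", "ch": "i", "sh": "u"}   # invented assignments
FINALS = {"uang": "d", "eng": "g", "ang": "h"}  # invented assignments

def to_shuangpin(initial, final):
    # Single-letter parts map to themselves; multi-letter parts use the tables.
    return INITIALS.get(initial, initial) + FINALS.get(final, final)

print(to_shuangpin("zh", "uang"))  # "vd" instead of typing z-h-u-a-n-g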
Shape-based
Cangjie input method
Simplified Cangjie
Dayi method
Array input method
Four-corner method
Stroke count method
Wubi method
Zhengma method
Biaoxingma method
ZYQ method
Others
Chinese telegraph code
Examples of keyboard layouts
Software
Microsoft IME
Sogou Pinyin
Google Pinyin
See also
List of input methods for Unix platforms
List of CJK fonts
Chinese language and computers
Japanese language and computers
Japanese input methods
Korean language and computers
Vietnamese language and computers
Han unification
Character amnesia
Chinese character encodings:
Big5
Guobiao code (GB)
Unicode
Telegraph code
Chinese character IT
Notes
External links
What Does a Chinese Keyboard Look Like?, article by Slate.com
Overview of Input Methods, by Sebastien Bruggeman.
中文輸入法世界 Chinese input method news.
The engineering daring that led to the first Chinese personal computer. With 1,000s of Chinese characters and limited memory, inventors of the Sinotype III had to push the limits of early machines. by Tom Mullaney, June 29, 2021, techcrunch.com
How intensive modding ushered in China's computer revolution: Early Chinese engineers needed to constantly push against the boundaries of 'alphabetic order', by Tom Mullaney, October 24, 2021, techcrunch.com
The computer pioneer who built modern China, By Leila McNeill, 19 February 2020, bbc website.
Articles containing video clips
CJK input methods
Chinese-language computing | Chinese input method | ["Technology"] | 1,531 | ["Input methods", "Natural language and computing"] |
5,961 | https://en.wikipedia.org/wiki/Cognitive%20psychology | Cognitive psychology is the scientific study of human mental processes such as attention, language use, memory, perception, problem solving, creativity, and reasoning.
Cognitive psychology originated in the 1960s in a break from behaviorism, which held from the 1920s to 1950s that unobservable mental processes were outside the realm of empirical science. This break came as researchers in linguistics and cybernetics, as well as applied psychology, used models of mental processing to explain human behavior.
Work derived from cognitive psychology was integrated into other branches of psychology and various other modern disciplines like cognitive science, linguistics, and economics.
History
Philosophically, ruminations on the human mind and its processes have been around since the times of the ancient Greeks. In 387 BCE, Plato had suggested that the brain was the seat of the mental processes. In 1637, René Descartes posited that humans are born with innate ideas and advanced the idea of mind-body dualism, which would come to be known as substance dualism (essentially the idea that the mind and the body are two separate substances). From that time, major debates ensued through the 19th century regarding whether human thought was solely experiential (empiricism), or included innate knowledge (nativism). Some of those involved in this debate included George Berkeley and John Locke on the side of empiricism, and Immanuel Kant on the side of nativism.
With the philosophical debate continuing, the mid to late 19th century was a critical time in the development of psychology as a scientific discipline. Two discoveries that would later play substantial roles in cognitive psychology were Paul Broca's discovery of the area of the brain largely responsible for language production, and Carl Wernicke's discovery of an area thought to be mostly responsible for comprehension of language. Both areas were subsequently formally named for their founders, and disruptions of an individual's language production or comprehension due to trauma or malformation in these areas have come to commonly be known as Broca's aphasia and Wernicke's aphasia.
From the 1920s to the 1950s, the main approach to psychology was behaviorism. Initially, its adherents viewed mental events such as thoughts, ideas, attention, and consciousness as unobservable, hence outside the realm of a science of psychology. One early pioneer of cognitive psychology, whose work predated much of behaviorist literature, was Carl Jung. Jung introduced the hypothesis of cognitive functions in his 1921 book Psychological Types. Another pioneer of cognitive psychology, who worked outside the boundaries (both intellectual and geographical) of behaviorism, was Jean Piaget. From 1926 to the 1950s and into the 1980s, he studied the thoughts, language, and intelligence of children and adults.
In the mid-20th century, four main influences arose that would inspire and shape cognitive psychology as a formal school of thought:
With the development of new warfare technology during WWII, the need for a greater understanding of human performance came to prominence. Problems such as how to best train soldiers to use new technology and how to deal with matters of attention while under duress became areas of need for military personnel. Behaviorism provided little if any insight into these matters and it was the work of Donald Broadbent, integrating concepts from human performance research and the recently developed information theory, that forged the way in this area.
Developments in computer science would lead to parallels being drawn between human thought and the computational functionality of computers, opening entirely new areas of psychological thought. Allen Newell and Herbert Simon spent years developing the concept of artificial intelligence (AI) and later worked with cognitive psychologists regarding the implications of AI. This encouraged a conceptualization of mental functions patterned on the way that computers handled such things as memory storage and retrieval, and it opened an important doorway for cognitivism.
Noam Chomsky's 1959 critique of behaviorism, and empiricism more generally, initiated what would come to be known as the "cognitive revolution". Inside psychology, in criticism of behaviorism, J. S. Bruner, J. J. Goodnow & G. A. Austin wrote A Study of Thinking in 1956. In 1960, G. A. Miller, E. Galanter and K. Pribram wrote their famous Plans and the Structure of Behavior. The same year, Bruner and Miller founded the Harvard Center for Cognitive Studies, which institutionalized the revolution and launched the field of cognitive science.
Formal recognition of the field involved the establishment of research institutions such as George Mandler's Center for Human Information Processing in 1964. Mandler described the origins of cognitive psychology in a 2002 article in the Journal of the History of the Behavioral Sciences.
Ulric Neisser put the term "cognitive psychology" into common use through his book Cognitive Psychology, published in 1967. Neisser's definition of "cognition" illustrates the then-progressive concept of cognitive processes:
The term "cognition" refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations. ... Given such a sweeping definition, it is apparent that cognition is involved in everything a human being might possibly do; that every psychological phenomenon is a cognitive phenomenon. But although cognitive psychology is concerned with all human activity rather than some fraction of it, the concern is from a particular point of view. Other viewpoints are equally legitimate and necessary. Dynamic psychology, which begins with motives rather than with sensory input, is a case in point. Instead of asking how a man's actions and experiences result from what he saw, remembered, or believed, the dynamic psychologist asks how they follow from the subject's goals, needs, or instincts.
Cognitive processes
The main focus of cognitive psychologists is on the mental processes that affect behavior. Those processes include, but are not limited to, the following three stages of memory:
Sensory memory storage: holds sensory information
Short-term memory storage: holds information temporarily for analysis and retrieves information from long-term memory.
Long-term memory: holds information over an extended period of time and receives information from short-term memory.
Attention
The psychological definition of attention is "a state of focused awareness on a subset of the available sensation perception information". A key function of attention is to identify irrelevant data and filter it out, enabling significant data to be distributed to the other mental processes. For example, the human brain may simultaneously receive auditory, visual, olfactory, taste, and tactile information. The brain is able to consciously handle only a small subset of this information, and this is accomplished through the attentional processes.
Attention can be divided into two major attentional systems: exogenous control and endogenous control. Exogenous control works in a bottom-up manner and is responsible for orienting reflex, and pop-out effects. Endogenous control works top-down and is the more deliberate attentional system, responsible for divided attention and conscious processing.
One major focal point relating to attention within the field of cognitive psychology is the concept of divided attention. A number of early studies dealt with the ability of a person wearing headphones to discern meaningful conversation when presented with different messages into each ear; this is known as the dichotic listening task. Key findings involved an increased understanding of the mind's ability to both focus on one message, while still being somewhat aware of information being taken in from the ear not being consciously attended to. For example, participants (wearing earphones) may be told that they will be hearing separate messages in each ear and that they are expected to attend only to information related to basketball. When the experiment starts, the message about basketball will be presented to the left ear and non-relevant information will be presented to the right ear. At some point the message related to basketball will switch to the right ear and the non-relevant information to the left ear. When this happens, the listener is usually able to repeat the entire message at the end, having attended to the left or right ear only when it was appropriate. The ability to attend to one conversation in the face of many is known as the cocktail party effect.
Other major findings include that participants cannot comprehend both passages when shadowing one passage, they cannot report the content of the unattended message, while they can shadow a message better if the pitches in each ear are different. However, while deep processing does not occur, early sensory processing does. Subjects did notice if the pitch of the unattended message changed or if it ceased altogether, and some even oriented to the unattended message if their name was mentioned.
Memory
The two main types of memory are short-term memory and long-term memory; however, short-term memory has become better understood to be working memory. Cognitive psychologists often study memory in terms of working memory.
Working memory
Though working memory is often thought of as just short-term memory, it is more clearly defined as the ability to process and maintain temporary information in a wide range of everyday activities in the face of distraction. The famous memory capacity of 7 plus or minus 2 items reflects a combination of items held in working memory and in long-term memory.
One of the classic experiments is by Ebbinghaus, who found the serial position effect where information from the beginning and end of the list of random words were better recalled than those in the center. This primacy and recency effect varies in intensity based on list length. Its typical U-shaped curve can be disrupted by an attention-grabbing word; this is known as the Von Restorff effect.
Many models of working memory have been made. One of the most regarded is the Baddeley and Hitch model of working memory. It takes into account both visual and auditory stimuli, long-term memory to use as a reference, and a central processor to combine and understand it all.
A large part of memory is forgetting, and there is a large debate among psychologists of decay theory versus interference theory.
Long-term memory
Modern conceptions of memory are usually about long-term memory and break it down into three main sub-classes. These three classes are somewhat hierarchical in nature, in terms of the level of conscious thought related to their use.
Procedural memory is memory for the performance of particular types of action. It is often activated on a subconscious level, or at most requires a minimal amount of conscious effort. Procedural memory includes stimulus-response-type information, which is activated through association with particular tasks, routines, etc. A person is using procedural knowledge when they seemingly "automatically" respond in a particular manner to a particular situation or process. An example is driving a car.
Semantic memory is the encyclopedic knowledge that a person possesses. Knowledge like what the Eiffel Tower looks like, or the name of a friend from sixth grade, represent semantic memory. Access of semantic memory ranges from slightly to extremely effortful, depending on a number of variables including but not limited to recency of encoding of the information, number of associations it has to other information, frequency of access, and levels of meaning (how deeply it was processed when it was encoded).
Episodic memory is the memory of autobiographical events that can be explicitly stated. It contains all memories that are temporal in nature, such as when one last brushed one's teeth or where one was when one heard about a major news event. Episodic memory typically requires the deepest level of conscious thought, as it often pulls together semantic memory and temporal information to formulate the entire memory.
Perception
Perception involves both the physical senses (sight, smell, hearing, taste, touch, and proprioception) as well as the cognitive processes involved in interpreting those senses. Essentially, it is how people come to understand the world around them through the interpretation of stimuli. Early psychologists like Edward B. Titchener began to work with perception in their structuralist approach to psychology. Structuralism dealt heavily with trying to reduce human thought (or "consciousness", as Titchener would have called it) into its most basic elements by gaining an understanding of how an individual perceives particular stimuli.
Current perspectives on perception within cognitive psychology tend to focus on particular ways in which the human mind interprets stimuli from the senses and how these interpretations affect behavior. An example of the way in which modern psychologists approach the study of perception is the research being done at the Center for Ecological Study of Perception and Action at the University of Connecticut (CESPA). One study at CESPA concerns ways in which individuals perceive their physical environment and how that influences their navigation through that environment.
Language
Psychologists have had an interest in the cognitive processes involved with language that dates back to the 1870s, when Carl Wernicke proposed a model for the mental processing of language. Current work on language within the field of cognitive psychology varies widely. Cognitive psychologists may study language acquisition, individual components of language formation (like phonemes), how language use is involved in mood, or numerous other related areas.
Significant work has focused on understanding the timing of language acquisition and how it can be used to determine if a child has, or is at risk of, developing a learning disability. A study from 2012 showed that, while this can be an effective strategy, it is important that those making evaluations include all relevant information when making their assessments. Factors such as individual variability, socioeconomic status, short-term and long-term memory capacity, and others must be included in order to make valid assessments.
Metacognition
Metacognition, in a broad sense, is the thoughts that a person has about their own thoughts. More specifically, metacognition includes things like:
How effective a person is at monitoring their own performance on a given task (self-regulation).
A person's understanding of their capabilities on particular mental tasks.
The ability to apply cognitive strategies.
Much of the current study regarding metacognition within the field of cognitive psychology deals with its application within the area of education. Being able to increase a student's metacognitive abilities has been shown to have a significant impact on their learning and study habits. One key aspect of this concept is the improvement of students' ability to set goals and self-regulate effectively to meet those goals. As a part of this process, it is also important to ensure that students are realistically evaluating their personal degree of knowledge and setting realistic goals (another metacognitive task).
Common phenomena related to metacognition include:
Déjà Vu: feeling of a repeated experience.
Cryptomnesia: generating thought believing it is unique but it is actually a memory of a past experience; also known as unconscious plagiarism.
False fame effect: non-famous names can come to be judged famous after prior exposure.
Validity effect: statements seem more valid upon repeated exposure.
Imagination inflation: imagining an event that did not occur and having increased confidence that it did occur.
Modern perspectives
Modern perspectives on cognitive psychology generally address cognition as a dual process theory, expounded upon by Daniel Kahneman in 2011. Kahneman differentiated the two styles of processing more, calling them intuition and reasoning. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.
Applications
Abnormal psychology
Following the cognitive revolution, and as a result of many of the principal discoveries to come out of the field of cognitive psychology, the discipline of cognitive behavior therapy (CBT) evolved. Aaron T. Beck is generally regarded as the father of cognitive therapy, a particular type of CBT treatment. His work in the areas of recognition and treatment of depression has gained worldwide recognition. In his 1987 book titled Cognitive Therapy of Depression, Beck puts forth three salient points with regard to his reasoning for the treatment of depression by means of therapy or therapy and antidepressants versus using a pharmacological-only approach:
1. Despite the prevalent use of antidepressants, the fact remains that not all patients respond to them. Beck cites (in 1987) that only 60 to 65% of patients respond to antidepressants, and recent meta-analyses (a statistical breakdown of multiple studies) show very similar numbers.
2. Many of those who do respond to antidepressants end up not taking their medications, for various reasons. They may develop side-effects or have some form of personal objection to taking the drugs.
3. Beck posits that the use of psychotropic drugs may lead to an eventual breakdown in the individual's coping mechanisms. His theory is that the person essentially becomes reliant on the medication as a means of improving mood and fails to practice those coping techniques typically practiced by healthy individuals to alleviate the effects of depressive symptoms. By failing to do so, once the patient is weaned off of the antidepressants, they often are unable to cope with normal levels of depressed mood and feel driven to reinstate use of the antidepressants.
Social psychology
Many facets of modern social psychology have roots in research done within the field of cognitive psychology. Social cognition is a specific sub-set of social psychology that concentrates on processes that have been of particular focus within cognitive psychology, specifically applied to human interactions. Gordon B. Moskowitz defines social cognition as "... the study of the mental processes involved in perceiving, attending to, remembering, thinking about, and making sense of the people in our social world".
The development of multiple social information processing (SIP) models has been influential in studies involving aggressive and anti-social behavior. Kenneth Dodge's SIP model is one of, if not the most, empirically supported models relating to aggression. Among his research, Dodge posits that children who possess a greater ability to process social information more often display higher levels of socially acceptable behavior; that the type of social interaction that children have affects their relationships. His model asserts that there are five steps that an individual proceeds through when evaluating interactions with other individuals and that how the person interprets cues is key to their reactionary process.
Developmental psychology
Many of the prominent names in the field of developmental psychology base their understanding of development on cognitive models. One of the major paradigms of developmental psychology, the Theory of Mind (ToM), deals specifically with the ability of an individual to effectively understand and attribute cognition to those around them. This concept typically becomes fully apparent in children between the ages of 4 and 6. Essentially, before the child develops ToM, they are unable to understand that those around them can have different thoughts, ideas, or feelings than themselves. The development of ToM is a matter of metacognition, or thinking about one's thoughts. The child must be able to recognize that they have their own thoughts and in turn, that others possess thoughts of their own.
One of the foremost minds with regard to developmental psychology, Jean Piaget, focused much of his attention on cognitive development from birth through adulthood. Though there have been considerable challenges to parts of his stages of cognitive development, they remain a staple in the realm of education. Piaget's concepts and ideas predated the cognitive revolution but inspired a wealth of research in the field of cognitive psychology and many of his principles have been blended with modern theory to synthesize the predominant views of today.
Educational psychology
Modern theories of education have applied many concepts that are focal points of cognitive psychology. Some of the most prominent concepts include:
Metacognition: Metacognition is a broad concept encompassing all manners of one's thoughts and knowledge about their own thinking. A key area of educational focus in this realm is related to self-monitoring, which relates highly to how well students are able to evaluate their personal knowledge and apply strategies to improve knowledge in areas in which they are lacking.
Declarative knowledge and procedural knowledge: Declarative knowledge is a person's 'encyclopedic' knowledge base, whereas procedural knowledge is specific knowledge relating to performing particular tasks. The application of these cognitive paradigms to education attempts to augment a student's ability to integrate declarative knowledge into newly learned procedures in an effort to facilitate accelerated learning.
Knowledge organization: Applications of cognitive psychology's understanding of how knowledge is organized in the brain has been a major focus within the field of education in recent years. The hierarchical method of organizing information and how that maps well onto the brain's memory are concepts that have proven extremely beneficial in classrooms.
Personality psychology
Cognitive therapeutic approaches have received considerable attention in the treatment of personality disorders in recent years. The approach focuses on the formation of what it believes to be faulty schemata, centralized on judgmental biases and general cognitive errors.
Cognitive psychology vs. cognitive science
The line between cognitive psychology and cognitive science can be blurry. Cognitive psychology is better understood as predominantly concerned with applied psychology and the understanding of psychological phenomena. Cognitive psychologists are often heavily involved in running psychological experiments involving human participants, with the goal of gathering information related to how the human mind takes in, processes, and acts upon inputs received from the outside world. The information gained in this area is then often used in the applied field of clinical psychology.
Cognitive science is better understood as predominantly concerned with a much broader scope, with links to philosophy, linguistics, anthropology, neuroscience, and particularly with artificial intelligence. It could be said that cognitive science provides the corpus of information feeding the theories used by cognitive psychologists. Cognitive scientists' research sometimes involves non-human subjects, allowing them to delve into areas which would come under ethical scrutiny if performed on human participants. For instance, they may do research implanting devices in the brains of rats to track the firing of neurons while the rat performs a particular task. Cognitive science is highly involved in the area of artificial intelligence and its application to the understanding of mental processes.
Criticisms
Lack of cohesion
Some observers have suggested that as cognitive psychology became a movement during the 1970s, the intricacies of the phenomena and processes it examined meant it also began to lose cohesion as a field of study. In Psychology: Pythagoras to Present, for example, John Malone writes: "Examinations of late twentieth-century textbooks dealing with "cognitive psychology", "human cognition", "cognitive science" and the like quickly reveal that there are many, many varieties of cognitive psychology and very little agreement about exactly what may be its domain." This fragmentation produced competing models that questioned information-processing approaches to cognitive functioning, such as those arising from decision making and the behavioral sciences.
Controversies
In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. However, cognitive neuroscience continues to gather evidence of direct correlations between physiological brain activity and mental states, endorsing the basis for cognitive psychology.
There is, however, disagreement between neuropsychologists and cognitive psychologists. Cognitive psychology has produced models of cognition which are not supported by modern brain science. It is often the case that the advocates of different cognitive models form a dialectical relationship with one another, thus affecting empirical research, with researchers siding with their favorite theory. For example, advocates of mental model theory have attempted to find evidence that deductive reasoning is based on image thinking, while the advocates of mental logic theory have tried to prove that it is based on verbal thinking, leading to a disorderly picture of the findings from brain imaging and brain lesion studies. When theoretical claims are put aside, the evidence shows that interaction depends on the type of task tested, whether of visuospatial or linguistic orientation; but that there is also an aspect of reasoning which is not covered by either theory.
Similarly, neurolinguistics has found that it is easier to make sense of brain imaging studies when the theories are left aside. In the field of language cognition research, generative grammar has taken the position that language resides within its private cognitive module, while 'Cognitive Linguistics' goes to the opposite extreme by claiming that language is not an independent function, but operates on general cognitive capacities such as visual processing and motor skills. Consensus in neuropsychology however takes the middle position that, while language is a specialized function, it overlaps or interacts with visual processing. Nonetheless, much of the research in language cognition continues to be divided along the lines of generative grammar and Cognitive Linguistics; and this, again, affects adjacent research fields including language development and language acquisition.
Major research areas
Categorization
Induction and acquisition
Judgement and classification
Representation and structure
Similarity
Knowledge representation
Dual-coding theories
Media psychology
Mental imagery
Numerical cognition
Propositional encoding
Language
Language acquisition
Language processing
Memory
Aging and memory
Autobiographical memory
Childhood memory
Constructive memory
Emotion and memory
Episodic memory
Eyewitness memory
False memories
Flashbulb memory
List of memory biases
Long-term memory
Semantic memory
Short-term memory
Source-monitoring error
Spaced repetition
Working memory
Perception
Attention
Object recognition
Pattern recognition
Perception
Form perception
Psychophysics
Time sensation
Thinking
Choice (Glasser's theory)
Concept formation
Decision-making
Logic
Psychology of reasoning
Problem solving
Influential cognitive psychologists
John R. Anderson
Alan Baddeley
David Ausubel
Albert Bandura
Frederic Bartlett
Elizabeth Bates
Aaron T. Beck
Robert Bjork
Paul Bloom
Gordon H. Bower
Donald Broadbent
Jerome Bruner
Susan Carey
Noam Chomsky
Fergus Craik
Antonio Damasio
Hermann Ebbinghaus
Albert Ellis
K. Anders Ericsson
William Estes
Eugene Galanter
Vittorio Gallese
Michael Gazzaniga
Dedre Gentner
Vittorio Guidano
Philip Johnson-Laird
Daniel Kahneman
Nancy Kanwisher
Eric Lenneberg
Alan Leslie
Willem Levelt
Elizabeth Loftus
Alexander Luria
Brian MacWhinney
George Mandler
Jean Matter Mandler
Ellen Markman
James McClelland
George Armitage Miller
Ulrich Neisser
Allen Newell
Allan Paivio
Seymour Papert
Jean Piaget
Steven Pinker
Michael Posner
Karl H. Pribram
Giacomo Rizzolatti
Henry L. Roediger III
Eleanor Rosch
David Rumelhart
Eleanor Saffran
Daniel Schacter
Otto Selz
Roger Shepard
Richard Shiffrin
Herbert A. Simon
George Sperling
Robert Sternberg
Larry Squire
Saul Sternberg
Anne Treisman
Endel Tulving
Amos Tversky
Lev Vygotsky
See also
References
Further reading
Philip T. Quinlan, Ben Dyson. 2008. Cognitive Psychology. Pearson/Prentice Hall. ISBN 9780131298101
Robert J. Sternberg, Jeffery Scott Mio. 2009. Cognitive Psychology. Cengage Learning. ISBN 9780495506294
Nick Braisby, Angus Gellatly. 2012. Cognitive Psychology. Oxford University Press. ISBN 9780199236992
External links
Cognitive psychology article in Scholarpedia
Laboratory for Rational Decision Making
1967 introductions
Behavioural sciences
Cognition | Cognitive psychology | [
"Biology"
] | 5,498 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
5,962 | https://en.wikipedia.org/wiki/Comet | A comet is an icy, small Solar System body that warms and begins to release gases when passing close to the Sun, a process called outgassing. This produces an extended, gravitationally unbound atmosphere or coma surrounding the nucleus, and sometimes a tail of gas and dust gas blown out from the coma. These phenomena are due to the effects of solar radiation and the outstreaming solar wind plasma acting upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times Earth's diameter, while the tail may stretch beyond one astronomical unit. If sufficiently close and bright, a comet may be seen from Earth without the aid of a telescope and can subtend an arc of up to 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures and religions.
Comets usually have highly eccentric elliptical orbits, and they have a wide range of orbital periods, ranging from several years to potentially several million years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star. Long-period comets are set in motion towards the Sun by gravitational perturbations from passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition.
Extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids. Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System. However, the discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets. In the early 21st century, some minor bodies with long-period comet orbits but the physical characteristics of inner Solar System asteroids were discovered; these were called Manx comets. They are still classified as comets, such as C/2014 S3 (PANSTARRS). Twenty-seven Manx comets were found from 2013 to 2017.
At the latest count, there are 4,584 known comets. However, this represents a very small fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) is about one trillion. Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular. Particularly bright examples are called "great comets". Comets have been visited by uncrewed probes such as NASA's Deep Impact, which blasted a crater on Comet Tempel 1 to study its interior, and the European Space Agency's Rosetta, which became the first to land a robotic spacecraft on a comet.
Etymology
The word comet derives from the Old English cometa, from the Latin comēta or comētēs. That, in turn, is a romanization of the Greek komētēs 'wearing long hair', and the Oxford English Dictionary notes that the term already meant 'long-haired star, comet' in Greek. Komētēs was derived from komân 'to wear the hair long', which was itself derived from komē 'the hair of the head' and was used to mean 'the tail of a comet'.
The astronomical symbol for comets is ☄ (represented in Unicode as U+2604), consisting of a small disc with three hairlike extensions.
Physical characteristics
Nucleus
The solid, core structure of a comet is known as the nucleus. Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia. As such, they are popularly described as "dirty snowballs" after Fred Whipple's model. Comets with a higher dust content have been called "icy dirtballs". The term "icy dirtballs" arose after the observation of Comet 9P/Tempel 1's collision with an "impactor" probe sent by NASA's Deep Impact mission in July 2005. Research conducted in 2014 suggests that comets are like "deep fried ice cream", in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense.
The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. The nuclei contain a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, ethane, and perhaps more complex molecules such as long-chain hydrocarbons and amino acids. In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA's Stardust mission. In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets.
The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley's Comet (1P/Halley) reflects about four percent of the light that falls on it, and Deep Space 1 discovered that Comet Borrelly's surface reflects less than 3.0%; by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes.
Comet nuclei with radii of up to have been observed, but ascertaining their exact size is difficult. The nucleus of 322P/SOHO is probably only in diameter. A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than across. Known comets have been estimated to have an average density of . Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes.
Roughly six percent of the near-Earth asteroids are thought to be the extinct nuclei of comets that no longer experience outgassing, including 14827 Hypnos and 3552 Don Quixote.
Results from the Rosetta and Philae spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals. Further, the ALICE spectrograph on Rosetta determined that electrons (within above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma. Instruments on the Philae lander found at least sixteen organic compounds at the comet's surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet.
Coma
The streams of dust and gas released by the warming nucleus form a huge and extremely thin atmosphere around the comet called the "coma". The force exerted on the coma by the Sun's radiation pressure and solar wind causes an enormous "tail" to form, pointing away from the Sun.
The coma is generally made of water and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun. The parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry. Larger dust particles are left along the comet's orbital path whereas smaller particles are pushed away from the Sun into the comet's tail by light pressure.
Although the solid nucleus of comets is generally less than across, the coma may be thousands or millions of kilometers across, sometimes becoming larger than the Sun. For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun. The Great Comet of 1811 had a coma roughly the diameter of the Sun. Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars, around 1.5 astronomical units from the Sun. At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, thereby enlarging the tail. Ion tails have been observed to extend one astronomical unit (150 million km) or more.
Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System: the dust reflects sunlight directly, while the gases glow from ionisation. Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye. Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes.
In 1996, comets were found to emit X-rays. This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, "stealing" one or more electrons from the atom in a process called "charge exchange". This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons.
Bow shock
Bow shocks form as a result of the interaction between the solar wind and the cometary ionosphere, which is created by the ionization of gases in the coma. As the comet approaches the Sun, increasing outgassing rates cause the coma to expand, and the sunlight ionizes gases in the coma. When the solar wind passes through this ion coma, the bow shock appears.
The first observations were made in the 1980s and 1990s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion when the bow shocks already were fully developed.
The Rosetta spacecraft observed the bow shock at comet 67P/Churyumov–Gerasimenko at an early stage of bow shock development when the outgassing increased during the comet's journey toward the Sun. This young bow shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks.
Tails
In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope but these detections have been questioned. As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them.
The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet's orbit in such a manner that it often forms a curved tail called the type II or dust tail. At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory. On occasions, such as when Earth passes through a comet's orbital plane, the antitail, pointing in the opposite direction to the ion and dust tails, may be seen.
The observation of antitails contributed significantly to the discovery of solar wind. The ion tail is formed as a result of the ionization by solar ultra-violet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an "induced magnetosphere" around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called "pick-up ions") congregate and act to "load" the solar magnetic field with plasma, such that the field lines "drape" around the comet forming the ion tail.
If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a "tail disconnection event". This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke's Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe.
In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions.
Jets
Uneven heating can cause newly generated gases to break out of a weak spot on the surface of a comet's nucleus, like a geyser. These streams of gas and dust can cause the nucleus to spin, and even split apart. In 2010 it was revealed that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus. Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains into the coma.
Orbital characteristics
Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder. Comets are often classified according to the length of their orbital periods: the longer the period, the more elongated the ellipse.
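To make the period-to-elongation link concrete, Kepler's third law can be combined with the perihelion relation. The worked figures for Halley's Comet below are approximate values assumed for illustration; they are not taken from this article.

```latex
% Kepler's third law in heliocentric units (T in years, a in AU),
% together with the perihelion relation:
\[
  T^{2} = a^{3}, \qquad q = a(1 - e) \;\Rightarrow\; e = 1 - \frac{q}{a}.
\]
% Visible comets keep the perihelion distance q small (a few AU at most),
% so a longer period T means a larger semi-major axis a, which forces the
% eccentricity e toward 1, i.e. a more elongated ellipse.
% Worked example (Halley's Comet, approximate assumed values):
%   T = 75 yr gives a = 75^(2/3) = 17.8 AU (approximately);
%   with q = 0.59 AU, e = 1 - 0.59/17.8 = 0.97 (approximately).
```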
Short period
Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years. They usually orbit more-or-less in the ecliptic plane in the same direction as the planets. Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley's Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet's orbit are called its "family". Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits.
At the shorter orbital period extreme, Encke's Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs). Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs). To date, 70 Encke-type comets, 100 HTCs, and 755 JFCs have been reported.
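These period and inclination cut-offs amount to a simple decision rule, sketched below in Python. It encodes only the definitions stated in this section; the 5.2 AU figure standing in for "does not reach the orbit of Jupiter" and the example values for Halley are assumptions for illustration.

```python
# Rough comet classifier based on the period/inclination definitions above.
# Jupiter's semi-major axis (~5.2 AU) is an assumed stand-in for the
# qualitative "does not reach the orbit of Jupiter" test (Encke-type).

JUPITER_A_AU = 5.2

def classify_comet(period_years: float, inclination_deg: float,
                   aphelion_au: float) -> str:
    """Classify a comet from its period, inclination, and aphelion."""
    if period_years >= 200:
        return "long-period comet"
    if aphelion_au < JUPITER_A_AU:
        return "Encke-type comet"
    if period_years < 20 and inclination_deg <= 30:
        return "Jupiter-family comet (JFC)"
    if 20 <= period_years <= 200:
        return "Halley-type comet (HTC)"
    return "short-period comet (unclassified subtype)"

# Example: Halley's Comet (assumed ~75 yr period, ~162 deg inclination,
# ~35 AU aphelion) falls in the HTC bin.
print(classify_comet(75, 162, 35))  # -> Halley-type comet (HTC)
```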
Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt.
Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations. Short-period comets have a tendency for their aphelia to coincide with a giant planet's semi-major axis, with the JFCs being the largest group. It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods.
Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc, a disk of objects in the trans-Neptunian region, whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (after the Dutch astronomer Jan Hendrik Oort, who hypothesized its existence). Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits. Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable. Once flung into an orbit that carries it close to the Sun, a comet is stripped of large amounts of matter on each pass, which greatly influences its lifetime: the more material stripped, the shorter it lives.
Long period
Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands or even millions of years. An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System. For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having "periods". The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as C/1999 F1 and C/2017 T2 (PANSTARRS) can have aphelion distances of nearly with orbital periods estimated around 6 million years.
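As a hedged arithmetic check of the McNaught figure (the semi-major axis below is derived here, not stated in this article), Kepler's third law in heliocentric units gives:

```latex
\[
  a = T^{2/3} = (92{,}600)^{2/3} \approx 2{,}050\ \text{AU},
\]
% so an osculating eccentricity marginally above 1 near perihelion is still
% consistent with a bound barycentric orbit whose eccentricity is just
% below 1, as described above.
```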
Single-apparition or non-periodic comets are similar to long-period comets because they have parabolic or slightly hyperbolic trajectories when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun. The Sun's Hill sphere has an unstable maximum boundary of . Only a few hundred comets have been seen to reach a hyperbolic orbit (e > 1) when near perihelion that using a heliocentric unperturbed two-body best-fit suggests they may escape the Solar System.
To date, only two objects have been discovered with an eccentricity significantly greater than one: 1I/ʻOumuamua and 2I/Borisov, indicating an origin outside the Solar System. While ʻOumuamua, with an eccentricity of about 1.2, showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory, which suggest outgassing, indicate that it is probably a comet. On the other hand, 2I/Borisov, with an estimated eccentricity of about 3.36, has been observed to have the coma feature of comets, and is considered the first detected interstellar comet. Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet, giving it the largest eccentricity (1.057) of any known solar comet with a reasonable observation arc. Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS).
Some authorities use the term "periodic comet" to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets), whereas others use it to mean exclusively short-period comets. Similarly, although the literal meaning of "non-periodic comet" is the same as "single-apparition comet", some use it to mean all comets that are not "periodic" in the second sense (that is, to include all comets with a period greater than 200 years).
Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. Comets from interstellar space are moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). When such objects enter the Solar System, they have a positive specific orbital energy resulting in a positive velocity at infinity () and have notably hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter's orbit, give or take one and perhaps two orders of magnitude.
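The positive-specific-orbital-energy criterion can be written out explicitly. This is the standard two-body relation, supplied here for illustration rather than quoted from the article; mu denotes the Sun's gravitational parameter.

```latex
% Specific orbital energy of a body at heliocentric distance r and speed v,
% with mu = G M_sun and semi-major axis a:
\[
  \varepsilon = \frac{v^{2}}{2} - \frac{\mu}{r} = -\frac{\mu}{2a}.
\]
% epsilon < 0: bound ellipse (e < 1);  epsilon = 0: parabola (e = 1);
% epsilon > 0: hyperbola (e > 1), with hyperbolic excess speed
\[
  v_{\infty} = \sqrt{2\varepsilon}.
\]
```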
Oort cloud and Hills cloud
The Oort cloud is thought to occupy a vast region of space extending from beyond the outer limits of the Kuiper belt to tens of thousands of astronomical units from the Sun. It consists of material left over from the formation of the Solar System: planetesimals, chunks of leftover matter that assisted in the creation of planets, which were condensed and formed by the gravity of the Sun and later scattered into distant, eccentric orbits. The region can be subdivided into a spherical outer Oort cloud and a doughnut-shaped inner cloud, the Hills cloud. The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune. The inner Oort cloud is also known as the Hills cloud, named after Jack G. Hills, who proposed its existence in 1981. Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo; it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years.
Exocomets
Exocomets beyond the Solar System have been detected and may be common in the Milky Way. The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987. A total of 11 such exocomet systems have been identified, using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star. For ten years the Kepler space telescope searched for planets and other objects outside of the Solar System. The first transiting exocomets were found in February 2018 by a group consisting of professional astronomers and citizen scientists in light curves recorded by the Kepler space telescope. After Kepler retired in October 2018, the TESS telescope took over its mission. Since the launch of TESS, astronomers have discovered the transits of comets around the star Beta Pictoris using a light curve from TESS, and have since been able to better distinguish exocomets with the spectroscopic method. Transiting planets are detected via the light-curve method, which registers a symmetrical dip in a star's brightness when a planet passes in front of it. After further evaluation of these light curves, however, it has been discovered that asymmetrical dips are instead caused by the tail of a comet or of hundreds of comets.
Effects of comets
Connection to meteor showers
As a comet is heated during close passes to the Sun, outgassing of its icy components releases solid debris too large to be swept away by radiation pressure and the solar wind. If Earth's orbit sends it through that trail of debris, which is composed mostly of fine grains of rocky material, there is likely to be a meteor shower as Earth passes through. Denser trails of debris produce quick but intense meteor showers and less dense trails create longer but less intense showers. Typically, the density of the debris trail is related to how long ago the parent comet released the material. The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet Swift–Tuttle. Halley's Comet is the source of the Orionid shower in October.
Comets and impact on life
Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill Earth's oceans, or at least a significant portion of it. Others have cast doubt on this idea. The detection of organic molecules, including polycyclic aromatic hydrocarbons, in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life, or even life itself, to Earth. In 2013 it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis. The speed at which the comets entered the atmosphere, combined with the magnitude of energy created after initial contact, allowed smaller molecules to condense into the larger macro-molecules that served as the foundation for life. In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and is thus less of an indicator of life than has been supposed.
It is suspected that comet impacts have, over long timescales, delivered significant quantities of water to Earth's Moon, some of which may have survived as lunar ice. Comet and meteoroid impacts are thought to be responsible for the existence of tektites and australites.
Fear of comets
Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650. The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near. He listed ten pages of comet-related disasters, including "earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices".
By 1700 most scholars concluded that such events occurred whether a comet was seen or not. Using Edmond Halley's records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters. Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley's Comet, causing panicked buying of gas masks and quack "anti-comet pills" and "anti-comet umbrellas" by the public.
Fate of comets
Departure (ejection) from Solar System
If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such, they are called hyperbolic comets. Solar comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter. An example of this is Comet C/1980 E1, which was shifted from an orbit of 7.1 million years around the Sun to a hyperbolic trajectory after a 1980 close pass by the planet Jupiter. Interstellar comets such as 1I/ʻOumuamua and 2I/Borisov never orbited the Sun and therefore do not require a third-body interaction to be ejected from the Solar System.
Extinction
Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages. Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid. Some asteroids in elliptical orbits are now identified as extinct comets. Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei.
Breakup and collisions
The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart. A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter's atmosphere—the first time astronomers had observed a collision between two objects in the Solar System. Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006. Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC. Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact.
Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical. Another group of comets that is the result of fragmentation episodes is the Liller comet family made of C/1988 A1 (Liller), C/1996 Q1 (Tabur), C/2015 F3 (SWAN), C/2019 Y1 (ATLAS), and C/2023 V5 (Leonard).
Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela's Comet was one significant example, breaking into two pieces during its passage through perihelion in 1846. The two fragments were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when Earth crosses the orbit of Biela's Comet.
Some comets meet a more spectacular end – either falling into the Sun or colliding with a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.
Nomenclature
The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the "Great Comet of 1680", the "Great Comet of 1882", and the "Great January Comet of 1910".
After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley's Comet. Similarly, the second and third known periodic comets, Encke's Comet and Biela's Comet, were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance.
In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers or an instrument or program that helped to find it. For example, in 2019, astronomer Gennadiy Borisov observed a comet that appeared to have originated outside of the solar system; the comet was named 2I/Borisov after him.
History of study
Early observations and thought
From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia. Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants.
Aristotle (384–322 BC) was the first known scientist to use various theories and observational facts to employ a consistent, structured cosmological theory of comets. He believed that comets were atmospheric phenomena, due to the fact that they could appear outside of the zodiac and vary in brightness over the course of a few days. Aristotle's cometary theory arose from his observations and cosmological theory that everything in the cosmos is arranged in a distinct configuration. Part of this configuration was a clear separation between the celestial and terrestrial, believing comets to be strictly associated with the latter. According to Aristotle, comets must be within the sphere of the moon and clearly separated from the heavens. Also in the 4th century BC, Apollonius of Myndus supported the idea that comets moved like the planets. Aristotelian theory on comets continued to be widely accepted throughout the Middle Ages, despite several discoveries from various individuals challenging aspects of it.
In the 1st century AD, Seneca the Younger questioned Aristotle's logic concerning comets. He argued that because of their regular movement and imperviousness to wind, comets cannot be atmospheric, and that they are more permanent than suggested by their brief flashes across the sky. He pointed out that only the tails are transparent and thus cloudlike, and argued that there is no reason to confine their orbits to the zodiac. In criticizing Apollonius of Myndus, Seneca argued, "A comet cuts through the upper regions of the universe and then finally becomes visible when it reaches the lowest point of its orbit." While Seneca did not author a substantial theory of his own, his arguments would spark much debate among Aristotle's critics in the 16th and 17th centuries.
In the 1st century AD, Pliny the Elder believed that comets were connected with political unrest and death. Pliny observed comets as "human like", often describing their tails with "long hair" or "long beard". His system for classifying comets according to their color and shape was used for centuries.
In India, by the 6th century AD astronomers believed that comets were apparitions that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varāhamihira and Bhadrabahu, and the 10th-century astronomer Bhaṭṭotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were.
There is a claim that an Arab scholar in 1258 noted several recurrent appearances of a comet (or a type of comet), and though it's not clear if he considered it to be a single periodic comet, it might have been a comet with a period of around 63 years.
In 1301, the Italian painter Giotto was the first person to accurately and anatomically portray a comet. In his work Adoration of the Magi, Giotto's depiction of Halley's Comet in the place of the Star of Bethlehem would go unmatched in accuracy until the 19th century and be bested only with the invention of photography.
Astrological interpretations of comets proceeded to take precedence well into the 15th century, despite the presence of modern scientific astronomy beginning to take root. Comets continued to forewarn of disaster, as seen in the Luzerner Schilling chronicles and in the warnings of Pope Callixtus III. In 1578, German Lutheran bishop Andreas Celichius defined comets as "the thick smoke of human sins ... kindled by the hot and fiery anger of the Supreme Heavenly Judge". The next year, Andreas Dudith stated that "If comets were caused by the sins of mortals, they would never be absent from the sky."
Scientific approach
Crude attempts at a parallax measurement of Halley's Comet were made in 1456, but were erroneous. Regiomontanus was the first to attempt to calculate diurnal parallax by observing the Great Comet of 1472. His predictions were not very accurate, but they were conducted in the hopes of estimating the distance of a comet from Earth.
In the 16th century, Tycho Brahe and Michael Maestlin demonstrated that comets must exist outside of Earth's atmosphere by measuring the parallax of the Great Comet of 1577. Within the precision of the measurements, this implied the comet must be at least four times more distant than the Moon is from Earth. Based on observations in 1664, Giovanni Borelli recorded the longitudes and latitudes of comets that he observed, and suggested that cometary orbits may be parabolic. Despite being a skilled astronomer, in his 1623 book The Assayer, Galileo Galilei rejected Brahe's theories on the parallax of comets and claimed that they may be a mere optical illusion, despite little personal observation. In 1625, Maestlin's student Johannes Kepler upheld that Brahe's view of cometary parallax was correct. Additionally, the mathematician Jacob Bernoulli published a treatise on comets in 1682.
During the early modern period comets were studied for their astrological significance in medical disciplines. Many healers of this time considered medicine and astronomy to be inter-disciplinary and employed their knowledge of comets and other astrological signs for diagnosing and treating patients.
Isaac Newton, in his Principia Mathematica of 1687, proved that an object moving under the influence of gravity by an inverse square law must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet's path through the sky to a parabolic orbit, using the comet of 1680 as an example.
Newton described comets as compact and durable solid bodies moving in oblique orbits, and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. He suspected that comets were the origin of the life-supporting component of air. He pointed out that comets usually appear near the Sun, and therefore most likely orbit it. On their luminosity, he stated, "The comets shine by the Sun's light, which they reflect," with their tails illuminated by "the Sun's light reflected by a smoke arising from [the coma]".
In 1705, Edmond Halley (1656–1742) applied Newton's method to 23 cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 1758–59. Halley's predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet's 1759 perihelion to within one month's accuracy. When the comet returned as predicted, it became known as Halley's Comet.
As early as the 18th century, some scientists had made correct hypotheses as to comets' physical composition. In 1755, Immanuel Kant hypothesized in his Universal Natural History that comets were condensed from "primitive matter" beyond the known planets, which is "feebly moved" by gravity, then orbit at arbitrary inclinations, and are partially vaporized by the Sun's heat as they near perihelion. In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley's Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit, and he argued that the non-gravitational movements of Encke's Comet resulted from this phenomenon.
In the 19th century, the Astronomical Observatory of Padova was an epicenter in the observational study of comets. Led by Giovanni Santini (1787–1877) and followed by Giuseppe Lorenzoni (1843–1914), this observatory was devoted to classical astronomy, mainly to the orbit calculation of new comets and planets, with the goal of compiling a catalog of almost ten thousand stars. Situated in northern Italy, observations from this observatory were key in establishing important geodetic, geographic, and astronomical calculations, such as the difference of longitude between Milan and Padua, as well as between Padua and Fiume. Correspondence within the observatory, particularly between Santini and another astronomer, Giuseppe Toaldo, mentioned the importance of comet and planetary orbital observations.
In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock. This "dirty snowball" model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency's Giotto probe and the Soviet Union's Vega 1 and Vega 2) that flew through the coma of Halley's Comet in 1986, photographed the nucleus, and observed jets of evaporating material.
On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO (formaldehyde), and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
Spacecraft missions
The Halley Armada describes the collection of spacecraft missions that visited and/or made observations of Halley's Comet during its 1986 perihelion passage. The space shuttle Challenger was intended to carry out a study of Halley's Comet in 1986, but it exploded shortly after launch.
Deep Impact. Debate continues about how much ice is in a comet. In 2001, the Deep Space 1 spacecraft obtained high-resolution images of the surface of Comet Borrelly. It was found that the surface of Comet Borrelly is hot and dry, with a temperature of between , and extremely dark, suggesting that the ice has been removed by solar heating and maturation, or is hidden by the soot-like material that covers Borrelly. In July 2005, the Deep Impact probe blasted a crater on Comet Tempel 1 to study its interior. The mission yielded results suggesting that the majority of a comet's water ice is below the surface and that these reservoirs feed the jets of vaporized water that form the coma of Tempel 1. Renamed EPOXI, the Deep Impact probe made a flyby of Comet Hartley 2 on 4 November 2010.
Ulysses. In 2007, the Ulysses probe unexpectedly passed through the tail of comet C/2006 P1 (McNaught), which was discovered in 2006. Ulysses was launched in 1990 on a mission to orbit the Sun and study it at all latitudes.
Stardust. Data from the Stardust mission show that materials retrieved from the tail of Wild 2 were crystalline and could only have been "born in fire", at extremely high temperatures of over . Although comets formed in the outer Solar System, radial mixing of material during the early formation of the Solar System is thought to have redistributed material throughout the proto-planetary disk. As a result, comets contain crystalline grains that formed in the early, hot inner Solar System. This is seen in comet spectra as well as in sample return missions. More recent still, the materials retrieved demonstrate that the "comet dust resembles asteroid materials". These new results have forced scientists to rethink the nature of comets and their distinction from asteroids.
Rosetta. The Rosetta probe orbited Comet Churyumov–Gerasimenko. On 12 November 2014, its lander Philae successfully landed on the comet's surface, the first time a spacecraft had ever landed on a comet.
Classification
Great comets
Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets. Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet's brightness to depart drastically from predictions. Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so. Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet.
The Great Comet of 1577 is a well-known example of a great comet. It passed near Earth as a non-periodic comet and was seen by many, including well-known astronomers Tycho Brahe and Taqi ad-Din. Observations of this comet led to several significant findings regarding cometary science, especially for Brahe.
The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession—Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997 having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked eye observers in January 2007. It was the brightest in over 40 years.
Sungrazing comets
A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometers. Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation.
About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System. The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is the parent of two meteor streams, the Quadrantids and the Arietids.
Unusual comets
Of the thousands of known comets, some exhibit unusual properties. Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of the planet Mercury whereas the Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn. 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed. Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid .
Largest
The largest known periodic comet is 95P/Chiron, at 200 km in diameter, which comes to perihelion every 50 years just inside Saturn's orbit at 8 AU. The largest known Oort cloud comet is suspected to be Comet Bernardinelli-Bernstein, at ≈150 km, which will not come to perihelion until January 2031, just outside Saturn's orbit at 11 AU. The Comet of 1729 is estimated to have been ≈100 km in diameter and came to perihelion inside Jupiter's orbit at 4 AU.
Centaurs
Centaurs typically display characteristics of both asteroids and comets. Centaurs can be classified as comets, such as 60558 Echeclus and 166P/NEAT. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit, while 60558 Echeclus was discovered without a coma but later became active, and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for Cassini involved sending it to a centaur, but NASA decided to destroy it instead.
Observation
A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO. SOHO's 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010 and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur).
Lost
A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Their orbits were never known well enough to predict future appearances or the comets have disintegrated. However, occasionally a "new" comet is discovered, and calculation of its orbit shows it to be an old "lost" comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001. There are at least 18 comets that fit this category.
In popular culture
The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change. Halley's Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he'd "go out with the comet" in 1910) and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song "Halley Came to Jackson".
In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley's Comet in 1910, Earth passed through the comet's tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions, whereas the appearance of Comet Hale–Bopp in 1997 triggered the mass suicide of the Heaven's Gate cult.
In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films Deep Impact and Armageddon), or as a trigger of global apocalypse (Lucifer's Hammer, 1979) or zombies (Night of the Comet, 1984). In Jules Verne's Off on a Comet a group of people are stranded on a comet orbiting the Sun, while a large crewed space expedition visits Halley's Comet in Sir Arthur C. Clarke's novel 2061: Odyssey Three.
In literature
The long-period comet first recorded by Pons in Florence on 15 July 1825 inspired Lydia Sigourney's humorous poem in which all the celestial bodies argue over the comet's appearance and purpose.
Gallery
Videos
See also
The Big Splash
Comet vintages
List of impact craters on Earth
List of possible impact structures on Earth
Lists of comets
References
Footnotes
Citations
Bibliography
Further reading
External links
Comets at NASA's Solar System Exploration
International Comet Quarterly by Harvard University
Catalogue of the Solar System Small Bodies Orbital Evolution
Science Demos: Make a Comet by the National High Magnetic Field Laboratory
Comets: from myths to reality, exhibition on Paris Observatory digital library
Articles containing video clips
Ice
Extraterrestrial water
Concepts in astronomy
Solar System
| Comet | ["Physics", "Astronomy"] | 11,134 | ["Concepts in astronomy", "Outer space", "Solar System"] |
5,966 | https://en.wikipedia.org/wiki/Compost | Compost is a mixture of ingredients used as plant fertilizer and to improve soil's physical, chemical, and biological properties. It is commonly prepared by decomposing plant and food waste, recycling organic materials, and manure. The resulting mixture is rich in plant nutrients and beneficial organisms, such as bacteria, protozoa, nematodes, and fungi. Compost improves soil fertility in gardens, landscaping, horticulture, urban agriculture, and organic farming, reducing dependency on commercial chemical fertilizers. The benefits of compost include providing nutrients to crops as fertilizer, acting as a soil conditioner, increasing the humus or humic acid contents of the soil, and introducing beneficial microbes that help to suppress pathogens in the soil and reduce soil-borne diseases.
At the simplest level, composting requires gathering a mix of green waste (nitrogen-rich materials such as leaves, grass, and food scraps) and brown waste (woody materials rich in carbon, such as stalks, paper, and wood chips). The materials break down into humus in a process taking months. Composting can be a multistep, closely monitored process with measured inputs of water, air, and carbon- and nitrogen-rich materials. The decomposition process is aided by shredding the plant matter, adding water, and ensuring proper aeration by regularly turning the mixture in a process using open piles or windrows. Fungi, earthworms, and other detritivores further break up the organic material. Aerobic bacteria and fungi manage the chemical process by converting the inputs into heat, carbon dioxide, and ammonium ions.
Composting is an important part of waste management, since food and other compostable materials make up about 20% of waste in landfills, where anaerobic conditions cause these materials to biodegrade slowly. Composting offers an environmentally superior alternative to landfilling organic material because it avoids the methane emissions produced by anaerobic decomposition, and it provides economic and environmental co-benefits. For example, compost can also be used for land and stream reclamation, wetland construction, and landfill cover.
Fundamentals
Composting is an aerobic method of decomposing organic solid wastes, so it can be used to recycle organic material. The process involves decomposing organic material into a humus-like material, known as compost, which is a good fertilizer for plants.
Composting organisms require four equally important ingredients to work effectively:
Carbon is needed for energy; the microbial oxidation of carbon produces the heat required for other parts of the composting process. High carbon materials tend to be brown and dry.
Nitrogen is needed to grow and reproduce more organisms to oxidize the carbon. High nitrogen materials tend to be green and wet. They can also include colourful fruits and vegetables.
Oxygen is required to oxidize the carbon in the decomposition process. Aerobic bacteria need oxygen levels above 5% to perform the processes needed for composting.
Water is necessary in the right amounts to maintain activity without causing locally anaerobic conditions.
Certain ratios of these materials allow microorganisms to work at a rate that will heat up the compost pile. Active management of the pile (e.g., turning over the compost heap) is needed to maintain sufficient oxygen and the right moisture level. The air/water balance is critical to maintaining high temperatures until the materials are broken down.
Composting is most efficient with a carbon-to-nitrogen ratio of about 25:1. Hot composting focuses on retaining heat to increase the decomposition rate, thus producing compost more quickly. Rapid composting is favored by a carbon-to-nitrogen ratio of about 30:1 or less. Above 30:1, the substrate is nitrogen-starved; below 15:1, it is likely to outgas a portion of its nitrogen as ammonia.
Nearly all dead plant and animal materials have both carbon and nitrogen in different amounts. Fresh grass clippings have an average ratio of about 15:1 and dry autumn leaves about 50:1 depending upon species. Composting is an ongoing and dynamic process; adding new sources of carbon and nitrogen consistently, as well as active management, is important.
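The blended carbon-to-nitrogen ratio of a mix is often estimated as a mass-weighted average of the ingredients' ratios, a rule of thumb that is exact only when the ingredients have similar nitrogen fractions. A minimal Python sketch of that estimate, using the grass-clipping and leaf figures quoted above and the thresholds from the previous paragraph (the masses chosen are illustrative, not a standard recipe):

def blended_cn(ingredients):
    # ingredients: list of (mass_kg, cn_ratio) pairs
    total_mass = sum(m for m, _ in ingredients)
    return sum(m * r for m, r in ingredients) / total_mass

def assess(cn):
    # Thresholds from the text: ~25:1 is most efficient; above 30:1 the
    # substrate is nitrogen-starved; below 15:1 nitrogen outgasses as ammonia.
    if cn > 30:
        return "nitrogen-starved (decomposition slows)"
    if cn < 15:
        return "likely to lose nitrogen as ammonia"
    return "near the efficient range (~25:1)"

# Figures from the text: fresh grass clippings ~15:1, dry autumn leaves ~50:1.
mix = [(20.0, 15.0),   # 20 kg grass clippings
       (10.0, 50.0)]   # 10 kg dry leaves
cn = blended_cn(mix)
print(f"blended C:N ≈ {cn:.0f}:1 -> {assess(cn)}")   # ≈ 27:1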
Organisms
Organisms can break down organic matter in compost if provided with the correct mixture of water, oxygen, carbon, and nitrogen. They fall into two broad categories: chemical decomposers, which perform chemical processes on the organic waste, and physical decomposers, which process the waste into smaller pieces through methods such as grinding, tearing, chewing, and digesting.
Chemical decomposers
Bacteria are the most abundant and important of all the microorganisms found in compost. Bacteria process carbon and nitrogen and excrete plant-available nutrients such as nitrogen, phosphorus, and magnesium. Depending on the phase of composting, mesophilic or thermophilic bacteria may be the most prominent.
Mesophilic bacteria get compost to the thermophilic stage through oxidation of organic material. Afterwards they cure it, which makes the fresh compost more bioavailable for plants.
Thermophilic bacteria do not reproduce and are not active between , yet are found throughout soil. They activate once the mesophilic bacteria have begun to break down organic matter and increase the temperature to their optimal range. They have been shown to enter soils via rainwater. They are present so broadly because of many factors, including their spores being resilient. Thermophilic bacteria thrive at higher temperatures, reaching in typical mixes. Large-scale composting operations, such as windrow composting, may exceed this temperature, potentially killing beneficial soil microorganisms but also pasteurizing the waste.
Actinomycetota are needed to break down paper products such as newspaper, bark, etc., and other large molecules such as lignin and cellulose that are more difficult to decompose. The "pleasant, earthy smell of compost" is attributed to Actinomycetota. They make carbon, ammonia, and nitrogen nutrients available to plants.
Fungi such as molds and yeasts help break down materials that bacteria cannot, especially cellulose and lignin in woody material.
Protozoa contribute to biodegradation of organic matter and consume inactive bacteria, fungi, and micro-organic particulates.
Physical decomposers
Ants create nests, making the soil more porous and transporting nutrients to different areas of the compost.
Beetles as grubs feed on decaying vegetables.
Earthworms ingest partly composted material and excrete worm castings, making nitrogen, calcium, phosphorus, and magnesium available to plants. The tunnels they create as they move through the compost also increase aeration and drainage.
Flies feed on almost all organic material and put bacteria into the compost. Their population is kept in check by mites and the thermophilic temperatures that are unsuitable for fly larvae.
Millipedes break down plant material.
Rotifers feed on plant particles.
Snails and slugs feed on living or fresh plant material. They should be removed from compost before use, as they can damage plants and crops.
Sow bugs feed on rotting wood and decaying vegetation.
Springtails feed on fungi, molds, and decomposing plants.
Phases of composting
Under ideal conditions, composting proceeds through three major phases:
Mesophilic phase: The initial, mesophilic phase is when the decomposition is carried out under moderate temperatures by mesophilic microorganisms.
Thermophilic phase: As the temperature rises, a second, thermophilic phase starts, in which various thermophilic bacteria carry out the decomposition at higher temperatures.
Maturation phase: As the supply of high-energy compounds dwindles, the temperature starts to decrease, and the mesophilic bacteria once again predominate in the maturation phase.
Hot and cold composting – impact on timing
The time required to compost material relates to the volume of material, the particle size of the inputs (e.g. wood chips break down faster than branches), and the amount of mixing and aeration. Generally, larger piles reach higher temperatures and remain in a thermophilic stage for days or weeks. This is hot composting and is the usual method for large-scale municipal facilities and agricultural operations.
The Berkeley method produces finished compost in 18 days. It requires assembly of at least of material at the outset and needs turning every two days after an initial four-day phase. Such short processes involve some changes to traditional methods, including smaller, more homogenized particle sizes in the input materials, controlling carbon-to-nitrogen ratio (C:N) at 30:1 or less, and careful monitoring of the moisture level.
Cold composting is a slower process that can take up to a year to complete. It results from smaller piles, including many residential compost piles that receive small amounts of kitchen and garden waste over extended periods. Piles smaller than tend not to reach and maintain high temperatures. Turning is not necessary with cold composting, although a risk exists that parts of the pile may go anaerobic as it becomes compacted or waterlogged.
Pathogen removal
Composting can destroy some pathogens and seeds, by reaching temperatures above .
Dealing with stabilized compost – i.e. composted material in which microorganisms have finished digesting the organic matter and the temperature has reached between – poses very little risk, as these temperatures kill pathogens and even make oocysts unviable. The temperature at which a pathogen dies depends on the pathogen, how long the temperature is maintained (seconds to weeks), and pH.
Compost products such as compost tea and compost extracts have been found to have an inhibitory effect on Fusarium oxysporum, Rhizoctonia species, and Pythium debaryanum, plant pathogens that can cause crop diseases. Aerated compost teas are more effective than compost extracts. The microbiota and enzymes present in compost extracts also have a suppressive effect on fungal plant pathogens. Compost is a good source of biocontrol agents like B. subtilis, B. licheniformis, and P. chrysogenum that fight plant pathogens. Sterilizing the compost, compost tea, or compost extracts reduces the effect of pathogen suppression.
Diseases that can be contracted from handling compost
When turning compost that has not gone through phases where temperatures above are reached, a mouth mask and gloves must be worn to protect from diseases that can be contracted from handling compost, including:
Aspergillosis
Farmer's lung
Histoplasmosis – a disease caused by a fungus that grows in guano and bird droppings
Legionnaires' disease
Paronychia – an infection around the fingernails and toenails
Tetanus – a central nervous system disease
Oocysts are rendered unviable by temperatures over .
Environmental benefits
Compost adds organic matter to the soil and increases the nutrient content and biodiversity of microbes in soil. Composting at home reduces the amount of green waste being hauled to dumps or composting facilities. The reduced volume of materials being picked up by trucks results in fewer trips, which in turn lowers the overall emissions from the waste-management fleet.
Materials that can be composted
Potential sources of compostable materials, or feedstocks, include residential, agricultural, and commercial waste streams. Residential food or yard waste can be composted at home, or collected for inclusion in a large-scale municipal composting facility. In some regions, it could also be included in a local or neighborhood composting project.
Organic solid waste
The two broad categories of organic solid waste are green and brown. Green waste is generally considered a source of nitrogen and includes pre- and post-consumer food waste, grass clippings, garden trimmings, and fresh leaves. Animal carcasses, roadkill, and butcher residue can also be composted, and these are considered nitrogen sources.
Brown waste is a carbon source. Typical examples are dried vegetation and woody material such as fallen leaves, straw, woodchips, limbs, logs, pine needles, sawdust, and wood ash, but not charcoal ash. Products derived from wood such as paper and plain cardboard are also considered carbon sources.
Animal manure and bedding
On many farms, the basic composting ingredients are animal manure generated on the farm as a nitrogen source, and bedding as the carbon source. Straw and sawdust are common bedding materials. Nontraditional bedding materials are also used, including newspaper and chopped cardboard. The amount of manure composted on a livestock farm is often determined by cleaning schedules, land availability, and weather conditions. Each type of manure has its own physical, chemical, and biological characteristics. Cattle and horse manures, when mixed with bedding, possess good qualities for composting. Swine manure, which is very wet and usually not mixed with bedding material, must be mixed with straw or similar raw materials. Poultry manure must be blended with high-carbon, low-nitrogen materials.
Human excreta
Human excreta, sometimes called "humanure" in the composting context, can be added as an input to the composting process since it is a nutrient-rich organic material. Nitrogen, which serves as a building block for important plant amino acids, is found in solid human waste. Phosphorus, which helps plants convert sunlight into energy in the form of ATP, can be found in liquid human waste.
Solid human waste can be collected directly in composting toilets, or indirectly in the form of sewage sludge after it has undergone treatment in a sewage treatment plant. Both processes require capable design, as potential health risks need to be managed. In the case of home composting, a wide range of microorganisms, including bacteria, viruses, and parasitic worms, can be present in feces, and improper processing can pose significant health risks. In the case of large sewage treatment facilities that collect wastewater from a range of residential, commercial and industrial sources, there are additional considerations. The composted sewage sludge, referred to as biosolids, can be contaminated with a variety of metals and pharmaceutical compounds. Insufficient processing of biosolids can also lead to problems when the material is applied to land.
Urine can be put on compost piles or directly used as fertilizer. Adding urine to compost can increase temperatures, so can increase its ability to destroy pathogens and unwanted seeds. Unlike feces, urine does not attract disease-spreading flies (such as houseflies or blowflies), and it does not contain the most hardy of pathogens, such as parasitic worm eggs.
Animal remains
Animal carcasses may be composted as a disposal option. Such material is rich in nitrogen.
Human bodies
Composting technologies
Industrial-scale composting
In-vessel composting
Aerated static-pile composting
Windrow composting
Other systems at household level
Hügelkultur (raised garden beds or mounds)
The practice of making raised garden beds or mounds filled with rotting wood is called Hügelkultur in German. It is in effect the creation of a nurse log that is covered with soil.
Benefits of Hügelkultur garden beds include water retention and warming of soil. Buried wood acts like a sponge as it decomposes, able to capture water and store it for later use by crops planted on top of the bed.
Composting toilets
Related technologies
Vermicompost (also called worm castings, worm humus, worm manure, or worm faeces) is the end product of the breakdown of organic matter by earthworms. These castings have been shown to contain reduced levels of contaminants and a higher saturation of nutrients than the organic materials before vermicomposting.
Black soldier fly (Hermetia illucens) larvae are able to rapidly consume large amounts of organic material and can be used to treat human waste. The resulting compost still contains nutrients and can be used for biogas production, or for further traditional composting or vermicomposting.
Bokashi is a fermentation process rather than a decomposition process, and so retains the feedstock's energy, nutrient and carbon contents. There must be sufficient carbohydrate for fermentation to complete and therefore the process is typically applied to food waste, including noncompostable items. Carbohydrate is transformed into lactic acid, which dissociates naturally to form lactate, a biological energy carrier. The preserved result is therefore readily consumed by soil microbes and from there by the entire soil food web, leading to a significant increase in soil organic carbon and turbation. The process completes in weeks and returns soil acidity to normal.
Co-composting is a technique that processes organic solid waste together with other input materials such as dewatered fecal sludge or sewage sludge.
Anaerobic digestion combined with mechanical sorting of mixed waste streams is increasingly being used in developed countries due to regulations controlling the amount of organic matter allowed in landfills. Treating biodegradable waste before it enters a landfill reduces global warming from fugitive methane; untreated waste breaks down anaerobically in a landfill, producing landfill gas that contains methane, a potent greenhouse gas. The methane produced in an anaerobic digester can be used as biogas.
Uses
Agriculture and gardening
On open ground for growing wheat, corn, soybeans, and similar crops, compost can be broadcast across the top of the soil using spreader trucks or spreaders pulled behind a tractor. It is expected that the spread layer is very thin (approximately ) and worked into the soil prior to planting. Application rates of or more are not unusual when trying to rebuild poor soils or control erosion. Due to the extremely high cost of compost per unit of nutrients in the United States, on-farm use is relatively rare, since rates over 4 tons/acre may not be affordable. This results from an over-emphasis on "recycling organic matter" rather than on "sustainable nutrients". In countries such as Germany, where compost distribution and spreading are partially subsidized through the original waste fees, compost is used more frequently on open ground on the premise of nutrient "sustainability".
In plasticulture, strawberries, tomatoes, peppers, melons, and other fruits and vegetables are grown under plastic to control temperature, retain moisture and control weeds. Compost may be banded (applied in strips along rows) and worked into the soil prior to bedding and planting, be applied at the same time the beds are constructed and plastic laid down, or used as a top dressing.
Many crops are not seeded directly in the field but are started in seed trays in a greenhouse. When the seedlings reach a certain stage of growth, they are transplanted in the field. Compost may be part of the mix used to grow the seedlings, but is not normally used as the only planting substrate. The particular crop and the seeds' sensitivity to nutrients, salts, etc. dictate the ratio of the blend, and maturity is important to ensure that oxygen deprivation will not occur and that no lingering phytotoxins remain.
Compost can be added to soil, coir, or peat as a tilth improver, supplying humus and nutrients. It provides a rich growing medium as an absorbent material, holding moisture and soluble minerals and providing support and nutrients. Compost is rarely used alone; plants can flourish in soil blended with compost and other additives such as sand, grit, bark chips, vermiculite, perlite, or clay granules to produce loam. Compost can be tilled directly into the soil or growing medium to boost the level of organic matter and the overall fertility of the soil. Compost that is ready to be used as an additive is dark brown or even black with an earthy smell.
Generally, direct seeding into a compost is not recommended due to the speed with which it may dry, the possible presence of phytotoxins in immature compost that may inhibit germination, and the possible tie up of nitrogen by incompletely decomposed lignin. It is very common to see blends of 20–30% compost used for transplanting seedlings.
Compost can be used to increase plant immunity to diseases and pests.
Compost tea
Compost tea is a liquid extract leached from composted materials into water. Teas can be either aerated or non-aerated, depending on the fermentation process. Compost teas are generally produced by adding compost to water in a ratio of 1:4–1:10 and occasionally stirring to release microbes.
There is debate about the benefits of aerating the mixture. Non-aerated compost tea is cheaper and less labor-intensive, but there are conflicting studies regarding the risks of phytotoxicity and human pathogen regrowth. Aerated compost tea brews faster and generates more microbes, but has potential for human pathogen regrowth, particularly when one adds additional nutrients to the mixture.
Field studies have shown the benefits of adding compost teas to crops due to organic matter input, increased nutrient availability, and increased microbial activity. They have also been shown to have a suppressive effect on plant pathogens and soil-borne diseases. The efficacy is influenced by a number of factors, such as the preparation process, the source compost, the conditions of the brewing process, and the environment of the crops. Adding nutrients to compost tea can be beneficial for disease suppression, although it can trigger the regrowth of human pathogens like E. coli and Salmonella.
Compost extract
Compost extracts are unfermented or non-brewed extracts of leached compost contents dissolved in any solvent.
Commercial sale
Compost is sold as bagged potting mixes in garden centers and other outlets. These may include composted materials such as manure and peat but are also likely to contain loam, fertilizers, sand, grit, etc. Varieties include multi-purpose composts designed for most aspects of planting; John Innes formulations; and grow bags, designed to have crops such as tomatoes planted directly into them. There is also a range of specialist composts available, e.g. for vegetables, orchids, houseplants, hanging baskets, roses, ericaceous plants, seedlings, and potting on.
Other
Compost can also be used for land and stream reclamation, wetland construction, and landfill cover.
The temperatures generated by compost can be used to heat greenhouses, such as by being placed around the outside edges.
Regulations
There are process and product guidelines in Europe that date to the early 1980s (Germany, the Netherlands, Switzerland) and only more recently in the UK and the US. In both these countries, private trade associations within the industry have established loose standards, some say as a stop-gap measure to discourage independent government agencies from establishing tougher consumer-friendly standards. Compost is regulated in Canada and Australia as well.
EPA Class A and B guidelines in the United States were developed solely to manage the processing and beneficial reuse of sludge, also now called biosolids, following the US EPA ban on ocean dumping. About 26 American states now require composts to be processed according to these federal protocols for pathogen and vector control, even though their application to non-sludge materials has not been scientifically tested. For example, green waste composts are used at much higher rates than sludge composts were ever anticipated to be applied at. UK guidelines also exist regarding compost quality, as do Canadian, Australian, and various European standards.
In the United States, some compost manufacturers participate in a testing program offered by a private lobbying organization called the U.S. Composting Council (USCC). The USCC was originally established in 1991 by Procter & Gamble to promote composting of disposable diapers, following state mandates to ban diapers in landfills, which caused a national uproar. Ultimately the idea of composting diapers was abandoned, partly because it was never scientifically shown to be feasible, and mostly because the concept was a marketing stunt in the first place. After this, composting emphasis shifted back to recycling organic wastes previously destined for landfills. There are no bona fide quality standards in America, but the USCC sells a seal called "Seal of Testing Assurance" (also called "STA"). For a considerable fee, the applicant may display the USCC logo on products, agreeing to volunteer to customers a current laboratory analysis that includes parameters such as nutrients, respiration rate, salt content, pH, and a limited set of other indicators.
Many countries such as Wales and some individual cities such as Seattle and San Francisco require food and yard waste to be sorted for composting (San Francisco Mandatory Recycling and Composting Ordinance).
The USA is the only Western country that does not distinguish sludge-source compost from green-composts, and by default 50% of US states expect composts to comply in some manner with the federal EPA 503 rule promulgated in 1984 for sludge products.
There are health risk concerns about PFAS ("forever chemicals") levels in compost derived from sewage-sludge-sourced biosolids, and the EPA has not set health risk standards for this. The Sierra Club recommends that home gardeners avoid the use of sewage sludge-based fertilizer and compost, in part due to potentially high levels of PFAS. The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS, including health risks of PFAS in wastewater sludge.
History
Composting dates back to at least the early Roman Empire and was mentioned as early as Cato the Elder's treatise of 160 BCE. Traditionally, composting involved piling organic materials until the next planting season, at which time the materials would have decayed enough to be ready for use in the soil. Methodologies for organic composting were part of traditional agricultural systems around the world.
Composting began to modernize somewhat in the 1920s in Europe as a tool for organic farming. The first industrial station for the transformation of urban organic materials into compost was set up in Wels, Austria, in 1921. Early proponents of composting in farming include Rudolf Steiner, founder of a farming method called biodynamics, and Annie Francé-Harrar, who was appointed on behalf of the government in Mexico and supported the country in 1950–1958 in setting up a large humus organization to fight erosion and soil degradation. Sir Albert Howard, who worked extensively in India on sustainable practices, and Lady Eve Balfour were also major proponents of composting. Modern scientific composting was imported to America by the likes of J. I. Rodale, founder of Rodale, Inc. and of the magazine Organic Gardening, and others involved in the organic farming movement.
See also
Carbon farming
Human composting
Organic farming
Permaculture
Soil science
Sustainable agriculture
Terra preta
Waste sorting
Zero waste
Related lists
List of composting systems
List of environment topics
List of sustainable agriculture topics
List of organic gardening and farming topics
References
Organic fertilizers
Waste management
Gardening aids
Sanitation
Soil improvers
Soil
Sustainable food system
Biodegradable waste management
Permaculture
| Compost | ["Chemistry"] | 5,724 | ["Biodegradation", "Biodegradable waste management"] |
5,974 | https://en.wikipedia.org/wiki/Corundum | Corundum is a crystalline form of aluminium oxide () typically containing traces of iron, titanium, vanadium, and chromium. It is a rock-forming mineral. It is a naturally transparent material, but can have different colors depending on the presence of transition metal impurities in its crystalline structure. Corundum has two primary gem varieties: ruby and sapphire. Rubies are red due to the presence of chromium, and sapphires exhibit a range of colors depending on what transition metal is present. A rare type of sapphire, padparadscha sapphire, is pink-orange.
The name "corundum" is derived from the Tamil-Dravidian word kurundam (ruby-sapphire) (appearing in Sanskrit as kuruvinda).
Because of corundum's hardness (pure corundum is defined to have 9.0 on the Mohs scale), it can scratch almost all other minerals. It is commonly used as an abrasive on sandpaper and on large tools used in machining metals, plastics, and wood. Emery, a variety of corundum with no value as a gemstone, is commonly used as an abrasive. It is a black granular form of corundum, in which the mineral is intimately mixed with magnetite, hematite, or hercynite.
In addition to its hardness, corundum has a density of , which is unusually high for a transparent mineral composed of the low-atomic mass elements aluminium and oxygen.
Geology and occurrence
Corundum occurs as a mineral in mica schist, gneiss, and some marbles in metamorphic terranes. It also occurs in low-silica igneous syenite and nepheline syenite intrusives. Other occurrences are as masses adjacent to ultramafic intrusives, associated with lamprophyre dikes and as large crystals in pegmatites. It commonly occurs as a detrital mineral in stream and beach sands because of its hardness and resistance to weathering. The largest documented single crystal of corundum measured about , and weighed . The record has since been surpassed by certain synthetic boules.
Corundum for abrasives is mined in Zimbabwe, Pakistan, Afghanistan, Russia, Sri Lanka, and India. Historically it was mined from deposits associated with dunites in North Carolina, US, and from a nepheline syenite in Craigmont, Ontario. Emery-grade corundum is found on the Greek island of Naxos and near Peekskill, New York, US. Abrasive corundum is synthetically manufactured from bauxite.
Four corundum axes dating to 2500 BC from the Liangzhu culture and Sanxingcun culture (the latter of which is located in Jintan District) have been discovered in China.
Synthetic corundum
In 1837, Marc Antoine Gaudin made the first synthetic rubies by reacting alumina at a high temperature with a small amount of chromium as a colourant.
In 1847, J. J. Ebelmen made white synthetic sapphires by reacting alumina in boric acid.
In 1877, Frémy and Feil made crystal corundum from which small stones could be cut. Frémy and Auguste Verneuil manufactured artificial ruby by fusing and with a little chromium at temperatures above .
In 1903, Verneuil announced that he could produce synthetic rubies on a commercial scale using this flame fusion process.
The Verneuil process allows the production of flawless single-crystal sapphire and ruby gems of much larger size than normally found in nature. It is also possible to grow gem-quality synthetic corundum by flux-growth and hydrothermal synthesis. Because of the simplicity of the methods involved in corundum synthesis, large quantities of these crystals have become available on the market at a fraction of the cost of natural stones.
Synthetic corundum has a lower environmental impact than natural corundum by avoiding destructive mining and conserving resources. However, its production is energy-intensive, contributing to carbon emissions if fossil fuels are used, and involves chemicals that can pose risks.
Apart from ornamental uses, synthetic corundum is also used to produce mechanical parts (tubes, rods, bearings, and other machined parts), scratch-resistant optics, scratch-resistant watch crystals, instrument windows for satellites and spacecraft (because of its transparency in the ultraviolet to infrared range), and laser components. For example, the KAGRA gravitational wave detector's main mirrors are sapphires, and Advanced LIGO considered sapphire mirrors. Corundum has also found use in the development of ceramic armour thanks to its high hardness.
Structure and physical properties
Corundum crystallizes with trigonal symmetry in the space group and has the lattice parameters and at standard conditions. The unit cell contains six formula units.
The toughness of corundum is sensitive to surface roughness and crystallographic orientation. It may be 6–7 MPa·m^1/2 for synthetic crystals, and around 4 MPa·m^1/2 for natural.
In the lattice of corundum, the oxygen atoms form a slightly distorted hexagonal close packing, in which two-thirds of the octahedral sites between the oxygen ions are occupied by aluminium ions. The absence of aluminium ions from one of the three sites breaks the symmetry of the hexagonal close packing, reducing the space group symmetry to and the crystal class to trigonal. The structure of corundum is sometimes described as a pseudohexagonal structure.
The Young's modulus of corundum (sapphire) has been reported by many different sources with values varying between 300 and 500 GPa, but a commonly cited value used for calculations is 345 GPa. The Young's modulus is temperature dependent, and has been reported in the [0001] direction as 435 GPa at 323 K and 386 GPa at 1,273 K. The shear modulus of corundum is 145 GPa, and the bulk modulus is 240 GPa.
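Under an isotropic approximation, any two of these moduli fix Poisson's ratio through E = 2G(1 + ν) = 3K(1 − 2ν). The minimal Python sketch below, using the values quoted above, shows that the quoted E, G, and K imply two different Poisson's ratios; this is only a consistency check, and the mismatch is what one expects for an elastically anisotropic single crystal (see the orientation-dependent fiber moduli in the next paragraph):

E = 345.0  # Young's modulus, GPa (commonly cited value from the text)
G = 145.0  # shear modulus, GPa
K = 240.0  # bulk modulus, GPa

nu_from_G = E / (2.0 * G) - 1.0          # from E = 2G(1 + nu)
nu_from_K = 0.5 * (1.0 - E / (3.0 * K))  # from E = 3K(1 - 2*nu)

print(f"Poisson's ratio implied by G: {nu_from_G:.2f}")  # ~0.19
print(f"Poisson's ratio implied by K: {nu_from_K:.2f}")  # ~0.26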
Single crystal corundum fibers have potential applications in high temperature composites, and the Young's modulus is highly dependent on the crystallographic orientation along the fiber axis. The fiber exhibits a max modulus of 461 GPa when the crystallographic c-axis [0001] is aligned with the fiber axis, and minimum moduli ~373 GPa when a direction 45° away from the c-axis is aligned with the fiber axis.
The hardness of corundum measured by indentation at low loads of 1-2 N has been reported as 22-23 GPa in major crystallographic planes: (0001) (basal plane), (100) (rhombohedral plane), (110) (prismatic plane), and (102). The hardness can drop significantly under high indentation loads. The drop with respect to load varies with the crystallographic plane due to the difference in crack resistance and propagation between directions. One extreme case is seen in the (0001) plane, where the hardness under high load (~1 kN) is nearly half the value under low load (1-2 N).
Polycrystalline corundum formed through sintering and treated with a hot isostatic press process can achieve grain sizes in the range of 0.55-0.7 μm, and has been measured to have four-point bending strength between 600 and 700 MPa and three-point bending strength between 750 and 900 MPa.
Structure type
Because of its prevalence, corundum has also become the name of a major structure type (corundum type) found in various binary and ternary compounds.
See also
Aluminium oxynitride
Gemstone
Spinel – natural and synthetic mineral often mistaken for corundum
References
Abrasives
Aluminium minerals
Corundum varieties
Hematite group
Industrial minerals
Luminescent minerals
Oxide minerals
Polymorphism (materials science)
Superhard materials
Trigonal minerals
Minerals in space group 167
| Corundum | ["Physics", "Chemistry", "Materials_science", "Engineering"] | 1,687 | ["Luminescence", "Polymorphism (materials science)", "Luminescent minerals", "Materials science", "Materials", "Superhard materials", "Matter"] |
5,980 | https://en.wikipedia.org/wiki/Carbon%20sink | A carbon sink is a natural or artificial carbon sequestration process that "removes a greenhouse gas, an aerosol or a precursor of a greenhouse gas from the atmosphere". These sinks form an important part of the natural carbon cycle. An overarching term is carbon pool, which is all the places where carbon on Earth can be, i.e. the atmosphere, oceans, soil, florae, fossil fuel reservoirs and so forth. A carbon sink is a type of carbon pool that has the capability to take up more carbon from the atmosphere than it releases.
Globally, the two most important carbon sinks are vegetation and the ocean. Soil is an important carbon storage medium. Much of the organic carbon retained in the soil of agricultural areas has been depleted due to intensive farming. Blue carbon designates carbon that is fixed via certain marine ecosystems. Coastal blue carbon includes mangroves, salt marshes and seagrasses. These make up a majority of ocean plant life and store large quantities of carbon. Deep blue carbon is located in international waters and includes carbon contained in "continental shelf waters, deep-sea waters and the sea floor beneath them".
For climate change mitigation purposes, the maintenance and enhancement of natural carbon sinks, mainly soils and forests, is important. In the past, human practices like deforestation and industrial agriculture have depleted natural carbon sinks. This kind of land use change has been one of the causes of climate change.
Definition
In the context of climate change and in particular mitigation, a sink is defined as "Any process, activity or mechanism which removes a greenhouse gas, an aerosol or a precursor of a greenhouse gas from the atmosphere".
In the case of non-CO2 greenhouse gases, sinks need not store the gas. Instead they can break it down into substances that have a reduced effect on global warming. For example, nitrous oxide can be reduced to harmless N2.
Related terms are "carbon pool, reservoir, sequestration, source and uptake". The same publication defines carbon pool as "a reservoir in the Earth system where elements, such as carbon [...], reside in various chemical forms for a period of time."
Both carbon pools and carbon sinks are important concepts in understanding the carbon cycle, but they refer to slightly different things. A carbon pool is the overarching term for all the places where carbon can be stored (for example the atmosphere, oceans, soil, plants, and fossil fuels), and a carbon sink is a particular type of carbon pool that takes up more carbon than it releases.
Types
The amount of carbon dioxide varies naturally in a dynamic equilibrium with photosynthesis of land plants. The natural carbon sinks are:
Soil is a carbon store and active carbon sink.
Photosynthesis by terrestrial plants, such as grasses and trees, allows them to serve as carbon sinks during growing seasons.
Absorption of carbon dioxide by the oceans via solubility and biological pumps.
Artificial carbon sinks are those that store carbon in building materials or deep underground (geologic carbon sequestration). No major artificial systems remove carbon from the atmosphere on a large scale yet.
Public awareness of the significance of sinks has grown since passage of the 1997 Kyoto Protocol, which promotes their use as a form of carbon offset.
Natural carbon sinks
Soils
Soils represent a short to long-term carbon storage medium and contain more carbon than all terrestrial vegetation and the atmosphere combined. Plant litter and other biomass including charcoal accumulates as organic matter in soils, and is degraded by chemical weathering and biological degradation. More recalcitrant organic carbon polymers such as cellulose, hemi-cellulose, lignin, aliphatic compounds, waxes and terpenoids are collectively retained as humus.
Organic matter tends to accumulate in litter and soils of colder regions such as the boreal forests of North America and the Taiga of Russia. Leaf litter and humus are rapidly oxidized and poorly retained in sub-tropical and tropical climate conditions due to high temperatures and extensive leaching by rainfall. Areas, where shifting cultivation or slash and burn agriculture are practiced, are generally only fertile for two to three years before they are abandoned. These tropical jungles are similar to coral reefs in that they are highly efficient at conserving and circulating necessary nutrients, which explains their lushness in a nutrient desert.
Grasslands contribute to soil organic matter, stored mainly in their extensive fibrous root mats. Due in part to the climatic conditions of these regions (e.g., cooler temperatures and semi-arid to arid conditions), these soils can accumulate significant quantities of organic matter. This can vary based on rainfall, the length of the winter season, and the frequency of naturally occurring lightning-induced grass-fires. While these fires release carbon dioxide, they improve the quality of the grasslands overall, in turn increasing the amount of carbon retained in the humic material. They also deposit carbon directly into the soil in the form of biochar that does not significantly degrade back to carbon dioxide.
Much of the organic carbon retained in agricultural areas worldwide has been severely depleted due to intensive farming practices. Since the 1850s, a large proportion of the world's grasslands have been tilled and converted to croplands, allowing the rapid oxidation of large quantities of soil organic carbon. Methods that significantly enhance carbon sequestration in soil are collectively called carbon farming. They include, for example, no-till farming, residue mulching, cover cropping, and crop rotation.
Forests
Deep ocean, tidal marshes, mangroves and seagrasses
Enhancing natural carbon sinks
Purpose in the context of climate change
Carbon sequestration techniques in oceans
To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved large scale application so far: Seaweed farming, ocean fertilisation, artificial upwelling, basalt storage, mineralization and deep sea sediments, adding bases to neutralize acids. The idea of direct deep-sea carbon dioxide injection has been abandoned.
Artificial carbon sinks
Geologic carbon sequestration
Wooden buildings
Broad-based adoption of mass timber, and its role in substituting for steel and concrete in new mid-rise construction projects over the next few decades, has the potential to turn timber buildings into carbon sinks, as they store the carbon dioxide taken up from the air by trees that are harvested and used as mass timber. This could result in storing between 10 million tonnes of carbon per year in the lowest scenario and close to 700 million tonnes in the highest scenario. For this to happen, the harvested forests would need to be sustainably managed and wood from demolished timber buildings would need to be reused or preserved on land in various forms.
See also
Carbon budget
Forest management
Reforestation
References
Carbon dioxide
Carbon dioxide removal
Photosynthesis
Gas technologies
| Carbon sink | ["Chemistry", "Biology"] | 1,358 | ["Greenhouse gases", "Carbon dioxide", "Biochemistry", "Photosynthesis"] |
5,987 | https://en.wikipedia.org/wiki/Coal | Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen.
Coal is a type of fossil fuel, formed when dead plant matter decays into peat which is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times.
Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal.
The extraction and burning of coal damages the environment, causing premature death and illness, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide were emitted by burning coal in 2020, which is 40% of total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020.
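As a back-of-envelope check on the shares quoted above, the quoted percentages imply global totals in line with commonly cited figures for 2020. A minimal Python sketch using only the numbers in the text (the figures are rounded, so the implied totals are approximate):

coal_co2_gt = 14.0                     # Gt CO2 from coal burning in 2020 (from the text)
total_fossil_co2 = coal_co2_gt / 0.40  # coal is 40% of fossil fuel CO2 -> ~35 Gt
max_total_ghg = coal_co2_gt / 0.25     # coal is over 25% of all GHG -> under ~56 Gt CO2e
print(f"implied total fossil fuel CO2: {total_fossil_co2:.0f} Gt")
print(f"implied total GHG (upper bound): {max_total_ghg:.0f} Gt CO2e")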
Global coal use was 8.3 billion tonnes in 2022, and is set to remain at record levels in 2023. To meet the Paris Agreement target of keeping global warming below coal use needs to halve from 2020 to 2030, and "phasing down" coal was agreed upon in the Glasgow Climate Pact.
The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia.
Etymology
The word originally took the form col in Old English, from reconstructed Proto-Germanic *kula(n), from Proto-Indo-European root *g(e)u-lo- "live coal". Germanic cognates include the Old Frisian , Middle Dutch , Dutch , Old High German , German and Old Norse . Irish is also a cognate via the Indo-European root.
Formation of coal
The conversion of dead vegetation into coal is called coalification. At various times in the geologic past, the Earth had dense forests in low-lying areas. In these wetlands, the process of coalification began when dead plant matter was protected from oxidation, usually by mud or acidic water, and was converted into peat. The resulting peat bogs, which trapped immense amounts of carbon, were eventually deeply buried by sediments. Then, over millions of years, the heat and pressure of deep burial caused the loss of water, methane and carbon dioxide and increased the proportion of carbon. The grade of coal produced depended on the maximum pressure and temperature reached, with lignite (also called "brown coal") produced under relatively mild conditions, and sub-bituminous coal, bituminous coal, or anthracite coal (also called "hard coal" or "black coal") produced in turn with increasing temperature and pressure.
Of the factors involved in coalification, temperature is much more important than either pressure or time of burial. Subbituminous coal can form at temperatures as low as while anthracite requires a temperature of at least .
Although coal is known from most geologic periods, 90% of all coal beds were deposited in the Carboniferous and Permian periods. Paradoxically, this was during the Late Paleozoic icehouse, a time of global glaciation. However, the drop in global sea level accompanying the glaciation exposed continental shelves that had previously been submerged, and to these were added wide river deltas produced by increased erosion due to the drop in base level. These widespread areas of wetlands provided ideal conditions for coal formation. The rapid formation of coal ended with the coal gap in the Permian–Triassic extinction event, where coal is rare.
Favorable geography alone does not explain the extensive Carboniferous coal beds. Other factors contributing to rapid coal deposition were high oxygen levels, above 30%, that promoted intense wildfires and formation of charcoal that was all but indigestible by decomposing organisms; high carbon dioxide levels that promoted plant growth; and the nature of Carboniferous forests, which included lycophyte trees whose determinate growth meant that carbon was not tied up in heartwood of living trees for long periods.
One theory suggested that about 360 million years ago, some plants evolved the ability to produce lignin, a complex polymer that made their cellulose stems much harder and more woody. The ability to produce lignin led to the evolution of the first trees. But bacteria and fungi did not immediately evolve the ability to decompose lignin, so the wood did not fully decay but became buried under sediment, eventually turning into coal. About 300 million years ago, mushrooms and other fungi developed this ability, ending the main coal-formation period of earth's history. Although some authors pointed to evidence of lignin degradation during the Carboniferous, and suggested that climatic and tectonic factors were a more plausible explanation, reconstruction of ancestral enzymes by phylogenetic analysis corroborated the hypothesis that lignin-degrading enzymes appeared in fungi approximately 200 Mya.
One likely tectonic factor was the Central Pangean Mountains, an enormous range running along the equator that reached its greatest elevation near this time. Climate modeling suggests that the Central Pangean Mountains contributed to the deposition of vast quantities of coal in the late Carboniferous. The mountains created an area of year-round heavy precipitation, with no dry season typical of a monsoon climate. This is necessary for the preservation of peat in coal swamps.
Coal is known from Precambrian strata, which predate land plants. This coal is presumed to have originated from residues of algae.
Sometimes coal seams (also known as coal beds) are interbedded with other sediments in a cyclothem. Cyclothems are thought to have their origin in glacial cycles that produced fluctuations in sea level, which alternately exposed and then flooded large areas of continental shelf.
Chemistry of coalification
The woody tissue of plants is composed mainly of cellulose, hemicellulose, and lignin. Modern peat is mostly lignin, with a content of cellulose and hemicellulose ranging from 5% to 40%. Various other organic compounds, such as waxes and nitrogen- and sulfur-containing compounds, are also present. Lignin has a weight composition of about 54% carbon, 6% hydrogen, and 30% oxygen, while cellulose has a weight composition of about 44% carbon, 6% hydrogen, and 49% oxygen. Bituminous coal has a composition of about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. The low oxygen content of coal shows that coalification removed most of the oxygen and much of the hydrogen, a process called carbonization.
Carbonization proceeds primarily by dehydration, decarboxylation, and demethanation. Dehydration removes water molecules from the maturing coal via reactions such as
2 R–OH → R–O–R + H2O
Decarboxylation removes carbon dioxide from the maturing coal:
RCOOH → RH + CO2
while demethanation proceeds by reaction such as
2 R-CH3 → R-CH2-R + CH4
R-CH2-CH2-CH2-R → R-CH=CH-R + CH4
In these formulas, R represents the remainder of a cellulose or lignin molecule to which the reacting groups are attached.
Dehydration and decarboxylation take place early in coalification, while demethanation begins only after the coal has already reached bituminous rank. The effect of decarboxylation is to reduce the percentage of oxygen, while demethanation reduces the percentage of hydrogen. Dehydration does both, and (together with demethanation) reduces the saturation of the carbon backbone (increasing the number of double bonds between carbon atoms).
As carbonization proceeds, aliphatic compounds convert to aromatic compounds. Similarly, aromatic rings fuse into polyaromatic compounds (linked rings of carbon atoms). The structure increasingly resembles graphene, the structural element of graphite.
Chemical changes are accompanied by physical changes, such as decrease in average pore size.
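The net effect of these reactions can be made concrete by converting the weight compositions quoted earlier into atomic H/C and O/C ratios, the coordinates of a van Krevelen diagram. A minimal Python sketch (minor elements are ignored, so the ratios are approximate):

# Atomic H/C and O/C ratios from the weight-percent compositions quoted above.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def atomic_ratios(wt_pct):
    moles = {el: wt_pct.get(el, 0.0) / ATOMIC_MASS[el] for el in ATOMIC_MASS}
    return moles["H"] / moles["C"], moles["O"] / moles["C"]

materials = {
    "lignin":          {"C": 54.0, "H": 6.0, "O": 30.0},
    "cellulose":       {"C": 44.0, "H": 6.0, "O": 49.0},
    "bituminous coal": {"C": 84.4, "H": 5.4, "O": 6.7},
}

for name, comp in materials.items():
    hc, oc = atomic_ratios(comp)
    print(f"{name:16s} H/C = {hc:.2f}, O/C = {oc:.2f}")

# Approximate output: lignin 1.32/0.42, cellulose 1.62/0.84, bituminous
# coal 0.76/0.06 (H/C, O/C). Dehydration, decarboxylation, and
# demethanation drive both ratios down toward pure carbon.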
Macerals
Macerals are coalified plant parts that retain the morphology and some properties of the original plant. In many coals, individual macerals can be identified visually. Some macerals include:
vitrinite, derived from woody parts
liptinite, derived from spores and algae
inertinite, derived from woody parts that had been burnt in prehistoric times
huminite, a precursor to vitrinite.
In coalification huminite is replaced by vitreous (shiny) vitrinite. Maturation of bituminous coal is characterized by bitumenization, in which part of the coal is converted to bitumen, a hydrocarbon-rich gel. Maturation to anthracite is characterized by debitumenization (from demethanation) and the increasing tendency of the anthracite to break with a conchoidal fracture, similar to the way thick glass breaks.
Types
As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade or rank increases successively into:
Peat, a precursor of coal
Lignite, or brown coal, the lowest rank of coal, most harmful to health when burned, used almost exclusively as fuel for electric power generation
Sub-bituminous coal, whose properties range between those of lignite and those of bituminous coal, is used primarily as fuel for steam-electric power generation.
Bituminous coal, a dense sedimentary rock, usually black, but sometimes dark brown, often with well-defined bands of bright and dull material. It is used primarily as fuel in steam-electric power generation and to make coke. Known as steam coal in the UK, and historically used to raise steam in steam locomotives and ships
Anthracite coal, the highest rank of coal, is a harder, glossy black coal used primarily for residential and commercial space heating.
Graphite, difficult to ignite and not commonly used as fuel; it is mostly used in pencils or, powdered, for lubrication.
Cannel coal (sometimes called "candle coal"), a variety of fine-grained, high-rank coal with significant hydrogen content, which consists primarily of liptinite. It is related to boghead coal.
There are several international standards for coal. The classification of coal is generally based on the content of volatiles. However, the most important distinction is between thermal coal (also known as steam coal), which is burnt to generate electricity via steam, and metallurgical coal (also known as coking coal), which is burnt at high temperature to make steel.
Hilt's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. For example, some of the coal seams of the Madrid, New Mexico coal field were partially converted to anthracite by contact metamorphism from an igneous sill while the remainder of the seams remained as bituminous coal.
History
The earliest recognized use is from the Shenyang area of China where by 4000 BC Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones ... which burn like logs", and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On Stones (Lap. 16) by the Greek scientist Theophrastus (c. 371–287 BC):
Outcrop coal was used in Britain during the Bronze Age (3000–2000 BC), where it formed part of funeral pyres. In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD". Evidence of trade in coal, dated to about AD 200, has been found at the Roman settlement at Heronbridge, near Chester; and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal's use for iron-working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore.
No evidence exists of coal being of great importance in Britain before about AD 1000, the High Middle Ages. Coal came to be referred to as "seacoal" in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257–1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime-burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, still exist.
These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal", because it came from mines.
Cooking and home heating with coal (in addition to firewood or instead of it) has been done in various times and places throughout human history, especially in times and places where ground-surface coal was available and firewood was scarce, but a widespread reliance on coal for home hearths probably never existed until such a switch in fuels happened in London in the late sixteenth and early seventeenth centuries. Historian Ruth Goodman has traced the socioeconomic effects of that switch and its later spread throughout Britain and suggested that its importance in shaping the industrial adoption of coal has been previously underappreciated.
The development of the Industrial Revolution led to the large-scale use of coal, as the steam engine took over from the water wheel. In 1700, five-sixths of the world's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947 there were some 750,000 miners in Britain, but the last deep coal mine in the UK closed in 2015.
A grade between bituminous coal and anthracite was once known as "steam coal" as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as "sea coal" in the United States. Small "steam coal", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating.
Coal played an important role in industry in the 19th and 20th century. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity.
Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many homes in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel.
Composition
Coal consists mainly of a black mixture of diverse organic compounds and polymers. Several kinds of coal exist, with varying dark colors and compositions; young coals (brown coal, lignite) are not black. The two main black coals are bituminous, which is more abundant, and anthracite. The carbon content of coal follows the order anthracite > bituminous > lignite (brown coal), and the fuel value of coal varies in the same order. Some anthracite deposits contain pure carbon in the form of graphite.
For bituminous coal, the elemental composition on a dry, ash-free weight basis is 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur. This composition partly reflects the composition of the precursor plants. The second main fraction of coal is ash, an undesirable, noncombustible mixture of inorganic minerals. The composition of ash is often discussed in terms of the oxides obtained after combustion in air, such as SiO2, Al2O3, Fe2O3, and CaO.
Of particular interest is the sulfur content of coal, which can vary from less than 1% to as much as 4%. Most of the sulfur and nitrogen is incorporated into the organic fraction in the form of organosulfur and organonitrogen compounds, strongly bound within the hydrocarbon matrix. These elements are released as SO2 and NOx upon combustion and cannot economically be removed beforehand. Some coals contain inorganic sulfur, mainly in the form of iron pyrite (FeS2); being a dense mineral, it can be removed from coal by mechanical means, e.g. by froth flotation. Some sulfate also occurs in coal, especially in weathered samples; it is not volatilized and can be removed by washing.
Minor components include trace minerals bearing mercury (Hg), arsenic (As), and selenium (Se).
Bound in minerals, Hg, As, and Se are not problematic to the environment, especially since they are only trace components. However, they become mobile (volatile or water-soluble) when these minerals are combusted.
Uses
Most coal is used as fuel: 27.6% of world energy was supplied by coal in 2017, and Asia used almost three-quarters of it. Other large-scale applications also exist. The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with a 40% efficiency, it takes an estimated 325 kg of coal to power a 100 W lightbulb for one year.
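That figure follows from the numbers quoted above (about 6.7 kWh of heat per kg and 40% plant efficiency); a minimal back-of-the-envelope check in Python, for illustration only:

bulb_power_w = 100
hours_per_year = 365 * 24                               # 8760 h
electricity_kwh = bulb_power_w * hours_per_year / 1000  # 876 kWh of electricity
thermal_kwh = electricity_kwh / 0.40                    # heat input needed at 40% efficiency
coal_kg = thermal_kwh / 6.7                             # 6.7 kWh of heat per kg of coal
print(round(coal_kg))                                   # ~327 kg, consistent with the estimate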
Electricity generation
In 2022, 68% of global coal consumption went to electricity generation.
Coal burnt in coal power stations to generate electricity is called thermal coal. It is usually pulverized and then burned in a furnace with a boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, turbine technology (e.g. supercritical steam generator) and the age of the plant.
A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity (just like natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants. Other ways to use coal are as coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However, these are not widely used because they are unprofitable.
In 2017, 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018, global installed capacity was 2 TW (of which 1 TW is in China), which was 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal; but China alone generates more than half of the world's coal-generated electricity. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and renewable energy. In 2018, the capacity factor of coal-fired power stations averaged 51%; that is, they operated for about half their available operating hours.
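Capacity factor has a simple definition: actual generation divided by the generation a fleet would produce running at full capacity for the whole period. A minimal Python sketch with hypothetical numbers chosen only to illustrate the arithmetic, not actual 2018 statistics:

capacity_gw = 1000                             # hypothetical installed coal capacity, GW
hours_in_year = 8760
max_twh = capacity_gw * hours_in_year / 1000   # 8760 TWh if running flat out
actual_twh = 4470                              # hypothetical annual generation, TWh
print(actual_twh / max_twh)                    # ~0.51, i.e. a ~51% capacity factor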
Coke
Coke is a solid carbonaceous residue that is used in manufacturing steel and other iron-containing products. Coke is made when metallurgical coal (also known as coking coal) is baked in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron.
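The net reduction in the blast furnace can be summarized by the standard overall equation (the actual furnace chemistry proceeds through further intermediate steps):

Fe2O3 + 3CO → 2Fe + 3CO2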
The product, pig iron, is too rich in dissolved carbon and must be processed further to make steel.
The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some coke-making processes produce byproducts, including coal tar, ammonia, light oils, and coal gas.
Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.
Production of chemicals
Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen, and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea, and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important.
Because the slate of chemical products that can be made via coal gasification can in general also be made from feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal tended to increase when oil and natural gas prices were high, and during periods of high global economic growth that might have strained oil and gas production.
Coal-to-chemical processes require substantial quantities of water. Much coal-to-chemical production is in China, where coal-dependent provinces such as Shanxi are struggling to control the resulting pollution.
Liquefaction
Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using carbon capture and storage (CCS) would emit slightly less than the oil process, but at a high cost. State-owned China Energy Investment runs a coal liquefaction plant and plans to build two more.
Coal liquefaction may also refer to the cargo hazard when shipping coal.
Gasification
Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal.
During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO) while also releasing hydrogen gas (H2). Gasification has also been carried out in situ within underground coal seams, and was historically used to make town gas, which was piped to customers to burn for illumination, heating, and cooking.
3C (as Coal) + O2 + H2O → H2 + 3CO
If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated:
CO + H2O → CO2 + H2
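For the Fischer–Tropsch route mentioned above, the generic overall reaction producing alkanes is:

nCO + (2n+1)H2 → CnH2n+2 + nH2O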
Coal industry
Mining
About 8,000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. Just over half is from underground mines. The coal mining industry employs almost 2.7 million workers. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics, so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining related deaths in China. Most coal mined is thermal coal (also called steam coal, as it is used to make steam to generate electricity), but metallurgical coal (also called "metcoal" or "coking coal", as it is used to make coke to make iron) accounts for 10% to 15% of global coal use.
As a traded commodity
China mines almost half the world's coal, followed by India with about a tenth. At 471 Mt and a 34% share of global exports, Indonesia was the largest exporter by volume in 2022, followed by Australia with 344 Mt and Russia with 224 Mt. Other major exporters of coal are the United States, South Africa, Colombia, and Canada. In 2022, China, India, and Japan were the biggest importers of coal, importing 301, 228, and 184 Mt respectively. Russia is increasingly orienting its coal exports from Europe to Asia as Europe transitions to renewable energy and subjects Russia to sanctions over its invasion of Ukraine.
The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management.
In some countries, new onshore wind or solar generation already costs less than coal power from existing plants.
However, for China this is forecast for the early 2020s and for southeast Asia not until the late 2020s. In India, building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables.
In many countries in the Global North, there is a move away from the use of coal and former mine sites are being used as a tourist attraction.
Market trends
In 2022, China used 4520 Mt of coal, comprising more than half of global coal consumption. India, the European Union, and the United States were the next largest consumers of coal, using 1162, 461, and 455 Mt respectively. Over the past decade, China has almost always accounted for the lion's share of the global growth in coal demand. Therefore, international market trends depend on Chinese energy policy.
Although the government effort to reduce air pollution in China means that the global long-term trend is to burn less coal, the short and medium term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries.
Preliminary analysis by International Energy Agency (IEA) indicates that global coal exports reached an all-time high in 2023. Through to 2026, the IEA expects global coal trade to decline by about 12%, driven by growing domestic production in coal-intensive economies such as China and India and coal phase-out plans elsewhere, such as in Europe. While thermal coal exports are expected to decline by about 16% by 2026, exports of metallurgical coal are expected to slightly increase by almost 2%.
Damage to human health
The use of coal as fuel causes health problems and deaths. The mining and processing of coal causes air and water pollution. Coal-powered plants emit nitrogen oxides, sulfur dioxide, particulate pollution, and heavy metals, which adversely affect human health. Coal bed methane extraction is important to avoid mining accidents.
The deadly London smog of 1952 was caused primarily by the heavy use of coal. Globally, coal is estimated to cause 800,000 premature deaths every year, mostly in India and China.
Burning coal is a major contributor to sulfur dioxide emissions, which creates PM2.5 particulates, the most dangerous form of air pollution.
Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer.
Annual health costs in Europe from use of coal to generate electricity are estimated at up to €43 billion.
In China, early deaths due to air pollution from coal plants have been estimated at 200 per GW-year; they may be higher around power plants where scrubbers are not used, or lower where plants are far from cities. Improvements to China's air quality and human health would grow with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal, and there would be a net economic benefit.
A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, "a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period."
Breathing in coal dust causes coalworker's pneumoconiosis or "black lung", so called because the coal dust literally turns the lungs black. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust.
Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, which contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium.
Around 10% of coal is ash. Coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics.
Damage to the environment
Coal mining, coal combustion wastes, and flue gas are causing major environmental damage.
Water systems are affected by coal mining. For example, the mining of coal affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants contribute to the depletion of water resources.
One of the earliest known impacts of coal on the water cycle was acid rain. In 2014, approximately 100 Tg of sulfur as sulfur dioxide (SO2) was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4, which scatters solar radiation, so its increase in the atmosphere exerts a cooling effect on the climate that masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems.
Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668, and is still burning in the 21st century.
The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts which if discharged to land, air or waterways can pollute the environment. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia was discharged to the marine environment.
Climate change
The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, 40% of the total fossil fuel emissions, and more than a quarter of total emissions. Coal mining can emit methane, another greenhouse gas.
In 2016, world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. The emission intensity of coal varies with type and generator technology and exceeds 1200 g per kWh in some countries. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C, hundreds, or possibly thousands, of coal-fired power plants will need to be retired early.
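The per-kilowatt-hour figures above can be roughly reproduced from numbers given earlier in this article. A minimal Python sketch, assuming the bituminous composition (84.4% carbon, dry and ash-free) and energy figures (24 MJ/kg, 40% plant efficiency) quoted in earlier sections, and neglecting moisture, ash, and upstream emissions:

carbon_fraction = 0.844                          # from the Composition section
co2_per_kg_coal = carbon_fraction * 44 / 12      # kg CO2 per kg coal burned (~3.1)
elec_kwh_per_kg = 24 * 0.40 / 3.6                # kWh of electricity per kg (3.6 MJ = 1 kWh)
print(1000 * co2_per_kg_coal / elec_kwh_per_kg)  # ~1160 g CO2 per kWh, i.e. ~1.2 t/MWh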
Underground fires
Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels.
Pollution mitigation and carbon capture
Systems and technologies exist to mitigate the health and environmental impact of burning coal for energy.
Precombustion treatment
Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers.
Post combustion approaches
Post combustion approaches to mitigate pollution include flue-gas desulfurization, selective catalytic reduction, electrostatic precipitators, and fly ash reduction.
Carbon capture and storage
Carbon capture and storage (CCS) can be used to capture carbon dioxide from the flue gas of coal power plants and bury it securely in an underground reservoir. Between 1972 and 2017, plans were made to add CCS to enough coal and gas power plants to sequester 161 million tonnes of CO2 per year, but by 2021 98% of these plans had failed. Cost, the absence of measures to address long-term liability for stored CO2, and limited social acceptability have all contributed to project cancellations. As of 2024, CCS is in operation at only four coal power plants and one gas power plant worldwide.
"Clean coal" and "abated coal"
Since the mid-1980s, the term "clean coal" has been widely used with various meanings. Initially, "clean coal technology" referred to scrubbers and catalytic converters that reduced the pollutants that cause acid rain. The scope then expanded to include reduction of other pollutants such as mercury. Recently, the term has come to encompass the use of CCS to reduce greenhouse gas emissions (GHG). In political discourse, the phrase "clean coal" is sometimes used to suggest that coal itself can be clean. This suggestion is false: Technologies to mitigate emissions are implemented in the plants where coal is processed and burned, but coal as a product is intrinsically dirty.
In discussions on greenhouse gas emissions, another common term is "abatement" of coal use. At the 2023 United Nations Climate Change Conference, an agreement was reached to phase down unabated coal use. Since the term "abated" was not defined, the agreement was criticized for being open to abuse. Without a clear definition, it is possible for fossil fuel use to be called "abated" if it uses CCS only in a minimal fashion, such as capturing only 30% of the emissions from a plant.
The IPCC considers fossil fuels to be unabated if they are "produced and used without interventions that substantially reduce the amount of GHG emitted throughout the life-cycle; for example, capturing 90% or more from power plants."
Economics
In 2018, investment in coal supply went almost entirely to sustaining production levels rather than opening new mines.
In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy.
China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However two fifths of China's coal power stations are estimated to be loss-making.
Air pollution from coal storage and handling costs the US almost 200 dollars for every extra ton stored, due to PM2.5. Measures to cut air pollution benefit individuals financially and the economies of countries such as China.
Subsidies
Subsidies for coal in 2021, not including electricity subsidies, were substantial and were expected to rise in 2022. G20 countries provide significant government support each year for the production of coal, including coal-fired power: many subsidies are impossible to quantify, but they include domestic and international public finance, fiscal support, and state-owned enterprise (SOE) investments. In the EU, state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by the Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies, totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021.
Stranded assets
Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts.
Politics
Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for failing to cut its emissions at a faster rate than they were, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation.
Cultural usage
Coal is the official state mineral of Kentucky, and the official state rock of Utah and West Virginia. These US states have a historic link to coal mining.
Some cultures hold that children who misbehave will receive only a lump of coal in their Christmas stockings from Santa Claus, instead of presents.
It is also customary and considered lucky in Scotland to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.
See also
Épinac coal mine
Notes
References
Further reading
External links
Coal Transitions
Coal – International Energy Agency
CoalExit
European Association for Coal and Lignite
Coal news and industry magazine
Global Coal Plant Tracker
Centre for Research on Energy and Clean Air
Coal mining
Economic geology
Fuels
Sedimentary rocks
Solid fuels
Fossil fuels | Coal | [
"Chemistry"
] | 8,843 | [
"Fuels",
"Chemical energy sources"
] |
5,993 | https://en.wikipedia.org/wiki/Chemical%20bond | A chemical bond is the association of atoms or ions to form molecules, crystals, and other structures. The bond may result from the electrostatic force between oppositely charged ions as in ionic bonds or through the sharing of electrons as in covalent bonds, or some combination of these effects. Chemical bonds are described as having different strengths: there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole–dipole interactions, the London dispersion force, and hydrogen bonding.
Since opposite electric charges attract, the negatively charged electrons surrounding the nucleus and the positively charged protons within a nucleus attract each other. Electrons shared between two nuclei will be attracted to both of them. "Constructive quantum mechanical wavefunction interference" stabilizes the paired nuclei (see Theories of chemical bonding). Bonded nuclei maintain an optimal distance (the bond distance) balancing attractive and repulsive effects explained quantitatively by quantum theory.
The atoms in molecules, crystals, metals and other forms of matter are held together by chemical bonds, which determine the structure and properties of matter.
All bonds can be described by quantum theory, but, in practice, simplified rules and other theories allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and molecular orbital theory which includes the linear combination of atomic orbitals and ligand field theory. Electrostatics are used to describe bond polarities and the effects they have on chemical substances.
Overview of main types of chemical bonds
A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost or valence electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter.
In the simplest view of a covalent bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Energy is released by bond formation. This is not a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models.
In a polar covalent bond, one or more electrons are unequally shared between two nuclei. Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other). When covalent bonds link long chains of atoms in large molecules, however (as in polymers such as nylon), or when covalent bonds extend in networks through solids that are not composed of discrete molecules (such as diamond or quartz or the silicate minerals in many types of rock) then the structures that result may be both strong and tough, at least in the direction oriented correctly with networks of covalent bonds. Also, the melting points of such covalent polymers and networks increase greatly.
In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy-state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the positive and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds. Often, such bonds have no particular orientation in space, since they result from equal electrostatic attraction of each ion to all ions around them. Ionic bonds are strong (and thus ionic substances require high temperatures to melt) but also brittle, since the forces between ions are short-range and do not easily bridge cracks and fractures. This type of bond gives rise to the physical characteristics of crystals of classic mineral salts, such as table salt.
A less often mentioned type of bonding is metallic bonding. In this type of bonding, each atom in a metal donates one or more electrons to a "sea" of electrons that reside between many metal atoms. In this sea, each electron is free (by virtue of its wave nature) to be associated with a great many atoms at once. The bond results because the metal atoms become somewhat positively charged due to loss of their electrons while the electrons remain attracted to many atoms, without being part of any given atom. Metallic bonding may be seen as an extreme example of delocalization of electrons over a large system of covalent bonds, in which every atom participates. This type of bonding is often very strong (resulting in the tensile strength of metals). However, metallic bonding is more collective in nature than other types, and so it allows metal crystals to deform more easily, because they are composed of atoms attracted to each other, but not in any particularly-oriented ways. This results in the malleability of metals. The cloud of electrons in metallic bonding causes the characteristically good electrical and thermal conductivity of metals, and also their shiny lustre that reflects most frequencies of white light.
History
Early speculations about the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. In 1704, Sir Isaac Newton famously outlined his atomic bonding theory, in "Query 31" of his Opticks, whereby atoms attach to each other by some "force". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. "hooked atoms", "glued together by rest", or "stuck together by conspiring motions", Newton states that he would rather infer from their cohesion, that "particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect."
In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive characters of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, Alexander Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called "combining power", in which compounds were joined owing to an attraction of positive and negative poles. In 1904, Richard Abegg proposed his rule that the difference between the maximum and minimum valencies of an element is often eight. At this point, valency was still an empirical number based only on chemical properties.
However, the nature of the atom became clearer with Ernest Rutherford's 1911 discovery of an atomic nucleus surrounded by electrons, in which he cited the earlier work of Hantaro Nagaoka. Nagaoka had rejected Thomson's model on the grounds that opposite charges are impenetrable, and in 1904 he proposed an alternative planetary model of the atom in which a positively charged center is surrounded by a number of revolving electrons, in the manner of Saturn and its rings.
Nagaoka's model made two predictions:
a very massive atomic center (in analogy to a very massive planet)
electrons revolving around the nucleus, bound by electrostatic forces (in analogy to the rings revolving around Saturn, bound by gravitational forces.)
Rutherford mentions Nagaoka's model in his 1911 paper in which the atomic nucleus is proposed.
At the 1911 Solvay Conference, in the discussion of what could regulate energy differences between atoms, Max Planck stated: "The intermediaries could be the electrons." These nuclear models suggested that electrons determine chemical behavior.
Next came Niels Bohr's 1913 model of a nuclear atom with electron orbits. In 1916, chemist Gilbert N. Lewis developed the concept of electron-pair bonds, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, "An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively."
Also in 1916, Walther Kossel put forward a theory similar to Lewis' only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on that of Abegg's rule (1904).
Niels Bohr also proposed a model of the chemical bond in 1913. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other.
In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2+, was derived by the Danish physicist Øyvind Burrau. This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler–London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive electronic structures of molecules of F2 (fluorine) and O2 (oxygen) molecules, from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years.
In 1933, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculations, which used functions only of the distance of the electron from the atomic nucleus, used functions which also explicitly added the distance between the two electrons. With up to 13 adjustable parameters, they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and gave excellent agreement with experiments. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However, this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules.
Bonds in chemical formulas
Because atoms and molecules are three-dimensional, it is difficult to use a single method to indicate orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in different ways depending on the type of discussion. Sometimes, some details are neglected. For example, in organic chemistry one is sometimes concerned only with the functional group of the molecule. Thus, the molecular formula of ethanol may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from another part of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is discussed. Sometimes, even the non-bonding valence shell electrons (with the two-dimensional approximate directions) are marked, e.g. for elemental carbon ·C·. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene−4 anion (\/C=C/\ −4) indicating the possibility of bond formation.
Strong chemical bonds
Strong chemical bonds are the intramolecular forces that hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals.
The types of strong bond differ due to the difference in electronegativity of the constituent elements. Electronegativity is the tendency for an atom of a given chemical element to attract shared electrons when forming a chemical bond, where the higher the associated electronegativity then the more it attracts electrons. Electronegativity serves as a simple way to quantitatively estimate the bond energy, which characterizes a bond along the continuous scale from covalent to ionic bonding. A large difference in electronegativity leads to more polar (ionic) character in the bond.
Ionic bond
Ionic bonding is a type of electrostatic interaction between atoms that have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but an electronegativity difference of over 1.7 is likely to be ionic while a difference of less than 1.7 is likely to be covalent. Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e and +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured via such techniques as X-ray diffraction.
Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids such as sodium cyanide, NaCN. X-ray diffraction shows that in NaCN, for example, the bonds between sodium cations (Na+) and the cyanide anions (CN−) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the carbon (C) and nitrogen (N) atoms in cyanide are of the covalent type, so that each carbon is strongly bound to just one nitrogen, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal.
When such crystals are melted into liquids, the ionic bonds are broken first because they are non-directional and allow the charged species to move freely. Similarly, when such salts dissolve into water, the ionic bonds are typically broken by the interaction with water but the covalent bonds continue to hold. For example, in solution, the cyanide ions, still bound together as single CN− ions, move independently through the solution, as do sodium ions, as Na+. In water, charged ions move apart because each of them are more strongly attracted to a number of water molecules than to each other. The attraction between ions and water molecules in such solutions is due to a type of weak dipole-dipole type chemical bond. In melted ionic compounds, the ions continue to be attracted to each other, but not in any ordered or crystalline way.
Covalent bond
Covalent bonding is a common type of bonding in which two or more atoms share valence electrons more or less equally. The simplest and most common type is a single bond in which two atoms share two electrons. Other types include the double bond, the triple bond, one- and three-electron bonds, the three-center two-electron bond and three-center four-electron bond.
In non-polar covalent bonds, the electronegativity difference between the bonded atoms is small, typically 0 to 0.3. Bonds within most organic compounds are described as covalent. The figure shows methane (CH4), in which each hydrogen forms a covalent bond with the carbon. See sigma bonds and pi bonds for LCAO descriptions of such bonding.
Molecules that are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane.
A polar covalent bond is a covalent bond with a significant ionic character. This means that the two shared electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole–dipole interactions. The electronegativity difference between the two atoms in these bonds is 0.3 to 1.7.
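These electronegativity ranges lend themselves to a simple rule-of-thumb classifier. A minimal Python sketch; the cutoffs are the approximate values quoted above, not sharp physical boundaries, and the Pauling electronegativities in the examples are standard tabulated values:

def classify_bond(en_a, en_b):
    """Classify a bond by the Pauling electronegativity difference."""
    delta = abs(en_a - en_b)
    if delta < 0.3:
        return "nonpolar covalent"
    if delta <= 1.7:
        return "polar covalent"
    return "ionic"

print(classify_bond(2.20, 2.20))  # H-H   -> nonpolar covalent
print(classify_bond(2.20, 3.44))  # H-O   -> polar covalent (difference ~1.24)
print(classify_bond(0.93, 3.16))  # Na-Cl -> ionic (difference ~2.23)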
Single and multiple bonds
A single bond between two atoms corresponds to the sharing of one pair of electrons. The Hydrogen (H) atom has one valence electron. Two Hydrogen atoms can then form a molecule, held together by the shared pair of electrons. Each H atom now has the noble gas electron configuration of helium (He). The pair of shared electrons forms a single covalent bond. The electron density of these two bonding electrons in the region between the two atoms increases from the density of two non-interacting H atoms.
A double bond has two shared pairs of electrons, one in a sigma bond and one in a pi bond with electron density concentrated on two opposite sides of the internuclear axis. A triple bond consists of three shared electron pairs, forming one sigma and two pi bonds. An example is nitrogen. Quadruple and higher bonds are very rare and occur only between certain transition metal atoms.
Coordinate covalent bond (dipolar bond)
A coordinate covalent bond is a covalent bond in which the two shared bonding electrons are from the same one of the atoms involved in the bond. For example, boron trifluoride (BF3) and ammonia (NH3) form an adduct or coordination complex F3B←NH3 with a B–N bond in which a lone pair of electrons on N is shared with an empty atomic orbital on B. BF3 with an empty orbital is described as an electron pair acceptor or Lewis acid, while NH3 with a lone pair that can be shared is described as an electron-pair donor or Lewis base. The electrons are shared roughly equally between the atoms in contrast to ionic bonding. Such bonding is shown by an arrow pointing to the Lewis acid. (In the Figure, solid lines are bonds in the plane of the diagram, wedged bonds point towards the observer, and dashed bonds point away from the observer.)
Transition metal complexes are generally bound by coordinate covalent bonds. For example, the ion Ag+ reacts as a Lewis acid with two molecules of the Lewis base NH3 to form the complex ion Ag(NH3)2+, which has two Ag←N coordinate covalent bonds.
Metallic bonding
In metallic bonding, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement or delocalization of bonding electrons leads to classical metallic properties such as luster (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength.
Intermolecular bonding
There are several types of weak bonds that can be formed between two or more molecules which are not covalently bound. Intermolecular forces cause molecules to attract or repel each other. Often, these forces influence physical characteristics (such as the melting point) of a substance.
Van der Waals forces are interactions between closed-shell molecules. They include both Coulombic interactions between partial charges in polar molecules, and Pauli repulsions between closed electron shells.
Keesom forces are the forces between the permanent dipoles of two polar molecules. London dispersion forces are the forces between induced dipoles of different molecules. There can also be an interaction between a permanent dipole in one molecule and an induced dipole in another molecule.
Hydrogen bonds of the form A--H•••B occur when A and B are two highly electronegative atoms (usually N, O or F) such that A forms a highly polar covalent bond with H so that H has a partial positive charge, and B has a lone pair of electrons which is attracted to this partial positive charge and forms a hydrogen bond. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues. In some cases a similar halogen bond can be formed by a halogen atom located between two electronegative atoms on different molecules.
At short distances, repulsive forces between atoms also become important.
Theories of chemical bonding
In the (unrealistic) limit of "pure" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The force between the atoms depends on isotropic continuum electrostatic potentials. The magnitude of the force is in simple proportion to the product of the two ionic charges according to Coulomb's law.
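For two ions with charges q1 and q2 separated by a distance r, the magnitude of this Coulombic force is given (in SI units) by:

F = (1/4πε0) × q1q2/r2

where ε0 is the vacuum permittivity; the force is attractive for opposite charges and repulsive for like charges.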
Covalent bonds are better understood by valence bond (VB) theory or molecular orbital (MO) theory. The properties of the atoms involved can be understood using concepts such as oxidation number, formal charge, and electronegativity. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, bonding is conceptualized as being built up from electron pairs that are localized and shared by two atoms via the overlap of atomic orbitals. The concepts of orbital hybridization and resonance augment this basic notion of the electron pair bond. In molecular orbital theory, bonding is viewed as being delocalized and apportioned in orbitals that extend throughout the molecule and are adapted to its symmetry properties, typically by considering linear combinations of atomic orbitals (LCAO). Valence bond theory is more chemically intuitive by being spatially localized, allowing attention to be focused on the parts of the molecule undergoing chemical change. In contrast, molecular orbitals are more "natural" from a quantum mechanical point of view, with orbital energies being physically significant and directly linked to experimental ionization energies from photoelectron spectroscopy. Consequently, valence bond theory and molecular orbital theory are often viewed as competing but complementary frameworks that offer different insights into chemical systems. As approaches for electronic structure theory, both MO and VB methods can give approximations to any desired level of accuracy, at least in principle. However, at lower levels, the approximations differ, and one approach may be better suited for computations involving a particular system or property than the other.
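As a toy illustration of the molecular orbital picture, the simplest LCAO model of a homonuclear diatomic couples two identical atomic orbitals through a resonance integral; diagonalizing the resulting 2×2 Hamiltonian yields the bonding and antibonding levels. A minimal Python sketch; alpha and beta are illustrative parameters rather than values for any real molecule, and orbital overlap is neglected:

import numpy as np

alpha, beta = -11.0, -2.0      # illustrative on-site and coupling energies (eV)
H = np.array([[alpha, beta],
              [beta, alpha]])  # LCAO Hamiltonian for two identical orbitals
energies, coeffs = np.linalg.eigh(H)
print(energies)       # [-13., -9.]: alpha+beta (bonding) and alpha-beta (antibonding)
print(coeffs[:, 0])   # ~[0.707, 0.707] (up to sign): the in-phase bonding combination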
Unlike the spherically symmetrical Coulombic forces in pure ionic bonds, covalent bonds are generally directed and anisotropic. These are often classified based on their symmetry with respect to a molecular plane as sigma bonds and pi bonds. In the general case, atoms form bonds that are intermediate between ionic and covalent, depending on the relative electronegativity of the atoms involved. Bonds of this type are known as polar covalent bonds.
References
External links
W. Locke (1997). Introduction to Molecular Orbital Theory. Retrieved May 18, 2005.
Carl R. Nave (2005). HyperPhysics. Retrieved May 18, 2005.
Linus Pauling and the Nature of the Chemical Bond: A Documentary History. Retrieved February 29, 2008.
Quantum chemistry | Chemical bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,203 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
6,011 | https://en.wikipedia.org/wiki/Chomsky%20hierarchy | The Chomsky hierarchy, in the fields of formal language theory, computer science, and linguistics, is a containment hierarchy of classes of formal grammars. A formal grammar describes how to form strings from a language's vocabulary (or alphabet) that are valid according to the language's syntax. The linguist Noam Chomsky theorized that four different classes of formal grammars existed that could generate increasingly complex languages. Each class can also completely generate the languages of all inferior classes (set inclusive).
History
The general idea of a hierarchy of grammars was first described by Noam Chomsky in "Three models for the description of language" during the formalization of transformational-generative grammar (TGG). Marcel-Paul Schützenberger also played a role in the development of the theory of formal languages; the paper "The algebraic theory of context free languages" describes the modern hierarchy, including context-free grammars.
Independently, alongside linguists, mathematicians were developing models of computation (via automata). Parsing a sentence in a language is similar to computation, and the grammars described by Chomsky proved to both resemble and be equivalent in computational power to various machine models.
The hierarchy
The following table summarizes each of Chomsky's four types of grammars, the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. The classes are defined by the constraints on the production rules.

Grammar | Languages | Automaton | Production rules
Type-0 | Recursively enumerable | Turing machine | α → β (no constraints)
Type-1 | Context-sensitive | Linear bounded automaton | αAβ → αγβ
Type-2 | Context-free | Non-deterministic pushdown automaton | A → α
Type-3 | Regular | Finite-state automaton | A → a, A → aB
Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy; these would be properly between Type-0 and Type-1.
Every regular language is context-free, every context-free language is context-sensitive, every context-sensitive language is recursive and every recursive language is recursively enumerable. These are all proper inclusions, meaning that there exist recursively enumerable languages that are not context-sensitive, context-sensitive languages that are not context-free and context-free languages that are not regular.
Regular (Type-3) grammars
Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal, in which case the grammar is right regular. Alternatively, all the rules can have their right-hand sides consist of a single terminal, possibly preceded by a single nonterminal (left regular). These generate the same languages. However, if left-regular rules and right-regular rules are combined, the language need no longer be regular. The rule S → ε is also allowed here if S does not appear on the right side of any rule. These languages are exactly all languages that can be decided by a finite-state automaton. Additionally, this family of formal languages can be obtained by regular expressions. Regular languages are commonly used to define search patterns and the lexical structure of programming languages.
For example, the regular language L = {a^n | n ≥ 1} is generated by the Type-3 grammar with the productions being the following: S → aS, S → a.
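A minimal sketch of this grammar in Python (assuming the productions S → aS and S → a given above) shows that every derivation yields a string matched by the equivalent regular expression a+:

```python
# Derivations under the productions S -> aS and S -> a, checked against the
# equivalent regular expression a+.
import re

def derive(n: int) -> str:
    """Apply S -> aS (n - 1) times, then S -> a once, yielding a^n."""
    s = "S"
    for _ in range(n - 1):
        s = s.replace("S", "aS")
    return s.replace("S", "a")

for n in range(1, 5):
    w = derive(n)
    assert re.fullmatch(r"a+", w)
    print(w)  # a, aa, aaa, aaaa
```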
Context-free (Type-2) grammars
Type-2 grammars generate the context-free languages. These are defined by rules of the form A → α, with A being a nonterminal and α being a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Context-free languages (or rather their subset of deterministic context-free languages) are the theoretical basis for the phrase structure of most programming languages, though their syntax also includes context-sensitive name resolution due to declarations and scope. Often a subset of grammars is used to make parsing easier, such as by an LL parser.
For example, the context-free language L = {a^n b^n | n ≥ 1} is generated by the Type-2 grammar with the productions being the following: S → aSb, S → ab.
The language L = {a^n b^n | n ≥ 1} is context-free but not regular (by the pumping lemma for regular languages).
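The pushdown recognition of this language can be sketched in a few lines: push a symbol for every a, pop one for every b, and accept only if the stack empties exactly as the input ends. This is an illustrative recognizer, not a general parser:

```python
# A stack-based recognizer for {a^n b^n | n >= 1}: push for each 'a',
# pop for each 'b', accept when input and stack run out together.
def is_anbn(w: str) -> bool:
    stack, i = [], 0
    while i < len(w) and w[i] == "a":  # push phase
        stack.append("a")
        i += 1
    while i < len(w) and w[i] == "b":  # pop phase
        if not stack:
            return False               # more b's than a's
        stack.pop()
        i += 1
    # accept iff all input consumed, counts matched, and string nonempty
    return i == len(w) and not stack and len(w) > 0

assert is_anbn("aabb") and not is_anbn("aab") and not is_anbn("ba")
```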
Context-sensitive (Type-1) grammars
Type-1 grammars generate context-sensitive languages. These grammars have rules of the form αAβ → αγβ, with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty. The rule S → ε is allowed if S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton (a nondeterministic Turing machine whose tape is bounded by a constant times the length of the input).
For example, the context-sensitive language L = {a^n b^n c^n | n ≥ 1} is generated by the Type-1 grammar with the productions being the following: S → aBC, S → aSBC, CB → CZ, CZ → WZ, WZ → WC, WC → BC, aB → ab, bB → bb, bC → bc, cC → cc.
The language L = {a^n b^n c^n | n ≥ 1} is context-sensitive but not context-free (by the pumping lemma for context-free languages).
A proof that this grammar generates L = {a^n b^n c^n | n ≥ 1} is sketched in the article on context-sensitive grammars.
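As a sanity check on the production set given above (a reconstruction of the standard textbook example, so treat it as an assumption), the following sketch derives the string aabbcc step by step, replacing the first occurrence of each rule's left-hand side; it is a hand-picked derivation, not a general search:

```python
# Derives "aabbcc" under the reconstructed Type-1 productions above; each step
# replaces the first occurrence of a rule's left-hand side.
def apply_rule(s: str, lhs: str, rhs: str) -> str:
    assert lhs in s, f"rule {lhs} -> {rhs} does not apply to {s!r}"
    return s.replace(lhs, rhs, 1)

steps = [("S", "aSBC"), ("S", "aBC"),  # S => aSBC => aaBCBC
         ("CB", "CZ"), ("CZ", "WZ"),   # swap the middle CB to BC ...
         ("WZ", "WC"), ("WC", "BC"),   # ... via the Z and W intermediates
         ("aB", "ab"), ("bB", "bb"),   # rewrite the B's to b's ...
         ("bC", "bc"), ("cC", "cc")]   # ... then the C's to c's

s = "S"
for lhs, rhs in steps:
    s = apply_rule(s, lhs, rhs)
    print(s)
assert s == "aabbcc"
```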
Recursively enumerable (Type-0) grammars
Type-0 grammars include all formal grammars. There are no constraints on the production rules. They generate exactly all languages that can be recognized by a Turing machine; thus, any language that can be generated at all can be generated by a Type-0 grammar. These languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine.
See also
Chomsky normal form
Citations
References
1956 in computing
Formal languages
Generative linguistics
Hierarchy, Chomsky | Chomsky hierarchy | [
"Mathematics"
] | 1,154 | [
"Formal languages",
"Mathematical logic"
] |
6,014 | https://en.wikipedia.org/wiki/Cathode-ray%20tube | A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.
In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.
The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors.
Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes, whereas CRTs topped out at around 40 inches.
A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.
History
Discoveries
Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were travelling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the mass-to-charge ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891.
The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device. The Braun tube became the foundation of 20th century TV.
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device. He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society.
The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.
Development
In 1926, Kenjiro Takayanagi demonstrated a CRT TV receiver with a mechanical video camera that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display.
In 1927, Philo Farnsworth created a TV prototype.
The CRT was named in 1929 by inventor Vladimir K. Zworykin. He was subsequently hired by RCA, which was granted a trademark for the term "Kinescope", RCA's term for a CRT, in 1932; it voluntarily released the term to the public domain in 1950.
In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of TV.
The first commercially made electronic TV sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.
In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.
From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.
In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be mass produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.
The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 25 inches by 1974, 30 inches by 1980, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938.
In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.
1968 marked the release of the Sony Trinitron brand with the model KV-1310, which was based on aperture grille technology. It was acclaimed for improving output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, due to its unique triple-cathode single-gun construction.
In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.
In 1990, the first CRT with HD resolution, the Sony KW-3600HD, was released to the market. It is considered to be "historical material" by Japan's national museum.
The Sony KWP-5500HD, an HD CRT projection TV, was released in 1992.
In the mid-1990s, some 160 million CRTs were made per year.
In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.
The last large-scale manufacturer of (in this case, recycled) CRTs, Videocon, ceased in 2015. CRT TVs stopped being made around the same time.
In 2012, Samsung SDI and several other major companies were fined by the European Commission for price fixing of TV cathode-ray tubes.
The same occurred in 2015 in the US and in Canada in 2018.
Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.
Decline
Beginning in the late 1990s to the early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004. Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. LCD monitor sales began exceeding those of CRTs in 2003–2004, and LCD TV sales started exceeding those of CRTs in some markets in 2005. Samsung SDI stopped CRT production in 2012.
Despite being a mainstay of display technology for decades, CRT-based computer monitors and TVs are now obsolete. Demand for CRT screens dropped in the late 2000s. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets and vintage enthusiasts once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as advantages.
Some industries still use CRTs because it is too much effort, downtime, or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates. They are also used in some military equipment for similar reasons. At least one company still manufactures new CRTs for these markets.
A popular consumer usage of CRTs is for retrogaming. Some games are impossible to play without CRT display hardware. Light guns only work on CRTs because they depend on the progressive timing properties of CRTs. Another reason people use CRTs is the natural blending these displays apply to adjacent pixels. Some games designed for CRT displays exploit this, which allows them to look more aesthetically pleasing on these displays.
Constructions
Body
The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.
The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties.
The optical properties of the glass used on the screen affect color reproduction and purity in color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546 nm wavelength light, and a 10.16 mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside.
The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with neodymium, while those for monochrome CRTs were tinted to differing levels depending on the formulation used, and had transmittances of 42% or 30%. Purity is ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen) while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.
The funnel and the neck are made of leaded potash-soda glass or lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made out of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays, as it does not brown, unlike glass containing lead. Another glass formulation uses 2–3% of lead on the screen. Alternatively, zirconium can also be used on the screen in combination with barium, instead of lead.
Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays; usually, however, the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24–32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or ~1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.
The leaded glass in the funnels of CRTs may contain 21–25% of lead oxide (PbO), the neck may contain 30–40% of lead oxide, and the screen may contain 12% of barium oxide and 12% of strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s. Before this, CRTs used lead on the faceplate.
Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown Pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.
The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage, which can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The capacitor formed by the funnel has a value of 5–10 nF, rated at the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this, CRTs have to be discharged before handling to prevent injury.
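The stored energy behind that precaution follows from the capacitor energy formula E = CV²/2. This sketch uses assumed values from the ranges given above (10 nF, a 25 kV anode voltage):

```python
# Energy stored in the funnel capacitor, E = C * V^2 / 2; values are assumed
# from the ranges above (5-10 nF, anode voltages in the tens of kV).
def stored_energy_joules(capacitance_f: float, voltage_v: float) -> float:
    return 0.5 * capacitance_f * voltage_v ** 2

print(stored_energy_joules(10e-9, 25e3))  # ~3.1 J at 10 nF and 25 kV
```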
The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs. Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.
Size and weight
The size of a CRT can be measured by the screen's entire area (or face diagonal) or alternatively by only its viewable area (or diagonal) that is coated by phosphor and surrounded by black edges.
While the viewable area may be rectangular, the edges of the CRT may have a curvature (e.g. black stripe CRTs, first made by Toshiba in 1972) or the edges may be black and truly flat (e.g. Flatron CRTs), or the viewable area may follow the curvature of the edges of the CRT (with or without black edges or curved edges).
Small CRTs below 3 inches were made for handheld TVs such as the MTV-1 and for viewfinders in camcorders. In these, there may be no black edges, though the screens are truly flat.
Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT and limits its practical size. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel can vary in thickness, to join the thin neck with the thick screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.
Anode
The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.
For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel, whereas historically aquadag was painted onto the interior of monochrome CRTs.
The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.
The anode cap connection in modern CRTs must be able to handle up to 55–60 kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.
The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup being made of a nickel-chromium-iron alloy containing 40–49% nickel and 3–6% chromium to make the button easy to fuse to the funnel glass, a first inner cup made of thick inexpensive iron to shield against x-rays, and a second innermost cup, also made of iron or any other electrically conductive metal, to connect to the clip. The cups must be sufficiently heat resistant and have thermal expansion coefficients similar to that of the funnel glass to withstand being fused to it. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while it is being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.
The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses, which induces a high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (retrace timing capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.
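The induced voltage follows the inductor law V = L·dI/dt, so halving the collapse time doubles the peak voltage; this is what the retrace timing capacitor controls. A sketch with assumed winding values (1 mH, a 5 A current swing), purely for illustration:

```python
# Induced voltage in a collapsing inductor, V = L * dI/dt. The winding
# inductance and current swing below are assumptions for illustration.
def induced_voltage_v(inductance_h: float, delta_i_a: float, delta_t_s: float) -> float:
    return inductance_h * delta_i_a / delta_t_s

# The same 5 A swing in a 1 mH winding: a 10x faster collapse, 10x the voltage.
print(induced_voltage_v(1e-3, 5.0, 10e-6))  # 500 V over 10 microseconds
print(induced_voltage_v(1e-3, 5.0, 1e-6))   # 5,000 V over 1 microsecond
```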
The design of the high voltage power supply in a product using a CRT has an influence on the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If a product such as a TV set uses an unregulated high voltage power supply, meaning that the anode and focus voltages go down with increasing electron beam current when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT is displaying a moderately bright image, since for dark images the higher anode voltage counteracts the lower electron beam current, and for bright images the higher beam current counteracts the lower voltage. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.
Electron gun
The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.
Construction and method of operation
The electron gun has an indirectly heated hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5–2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one for red, green and blue. The heater sits inside the cathode but does not touch it; the cathode has its own separate electrical connection. The cathode is a material coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it.
There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons and may produce an image with a bright red, green or blue tint and retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.
The cathode is a layer of barium oxide which is coated on a piece of nickel for electrical and mechanical support. The barium oxide must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation is done while forming the vacuum (described in the Evacuation section below). After activation, the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800–1000 °C, at which point it starts shedding electrons.
Since it is a hot cathode, it is prone to cathode poisoning, the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal, preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself, and react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode; during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs, which have red, green and blue cathodes, one or more poisoned cathodes may be affected independently of the others, causing the total or partial loss of one or more colors and tinting the image. CRTs can wear or burn out due to cathode poisoning, which is accelerated by increased cathode current (overdriving). The ion layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide, or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%.
The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all, in such a case the CRT displays a poor gamma characteristic.
A negative voltage, relative to the cathode, is applied to the first (control) grid (G1) to converge the electrons from the hot cathode into an electron beam; G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage or the electron beam current (they are never varied), despite their influence on image brightness; rather, image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. The second (screen) grid of the gun (G2) then accelerates the electrons towards the screen using several hundred DC volts. Then a third grid (G3) electrostatically focuses the electron beam before it is deflected and later accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of kilovolts, so a high voltage (≈600–8000 V) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an einzel lens; alternatively, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT.
There is a voltage, called the cutoff voltage, that creates black on the screen by making the electron beam, and thus the image it draws, disappear; it is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grids G1 and G2 across all three guns, increasing image brightness and simplifying adjustment, since on such CRTs there is a single cutoff voltage for all three guns (as G1 is shared across all of them), but this places additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs, video is fed to the gun by varying the voltage on the first control grid.
During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image.
The electron beam may be affected by the Earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmatism controls. Astigmatism controls are both magnetic and electronic (dynamic); the magnetic controls do most of the work while the electronic ones are used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, especially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.
After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.
The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (speed at which voltage can be varied and thus switching between black and white) and higher contrast ratios need higher voltage variations or amplitude for lower black and higher white levels. 30 MHz of bandwidth can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus creating shades of colors which create the image line by line and this can also affect the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun since the red phosphor emits the least amount of light.
Gamma
CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).
Deflection
There are two types of deflection: magnetic and electrostatic. Magnetic deflection is usually used in TVs and monitors, as it allows for higher deflection angles (and hence shallower CRTs) and higher deflection power (which allows for higher electron beam current and hence brighter images), while avoiding the need for deflection voltages of up to 2 kV. Oscilloscopes often use electrostatic deflection instead, since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.
Magnetic deflection
Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical, and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Those that were bonded used glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT while those with removable yokes are clamped. The yoke generates heat, whose removal is essential since the conductivity of glass goes up with increasing temperature; the glass needs to remain insulating for the CRT to stay usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force, as well as the magnetized rings used to align or adjust the electron beams in color CRTs (the color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector; the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.
The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (the horizontal scan rate) of 15–240 kHz depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT; a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run are in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.
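The horizontal scan rate is simply the refresh rate multiplied by the total number of scan lines per frame (including blanking lines), since the beam must sweep once per line. A sketch with illustrative, assumed line counts:

```python
# Horizontal scan rate = refresh rate * total lines per frame (the beam must
# sweep once per line). Line counts include blanking and are illustrative.
def horizontal_scan_rate_hz(refresh_hz: float, total_lines: float) -> float:
    return refresh_hz * total_lines

print(horizontal_scan_rate_hz(59.94, 262.5))  # ~15.7 kHz: one interlaced NTSC field
print(horizontal_scan_rate_hz(85.0, 1280.0))  # ~109 kHz: assumed high-res monitor
```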
Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require ~24 volts while the horizontal deflection coils require ~120 volts to operate.
The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.
Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection is dependent on the energy of the electrons. Higher energy (voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
Electrostatic deflection
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one for horizontal, and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair; for example, applying a voltage to the upper plate of the vertical deflection pair, while keeping the voltage in the bottom plate at 0 volts, will cause the electron beam to be deflected towards the upper part of the screen; increasing the voltage in the upper plate while keeping the bottom plate at 0 will cause the electron beam to be deflected to a higher point on the screen (deflecting the beam at a higher deflection angle). The same applies to the horizontal deflection plates. Increasing the length of, and proximity between, the plates in a pair can also increase the deflection angle.
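For small angles, the textbook model of electrostatic deflection gives a spot displacement of D = (l·L·Vd)/(2·d·Va), where l is the plate length, L the distance from the plates to the screen, d the plate separation, Vd the deflection voltage and Va the accelerating voltage. A sketch with assumed dimensions, not taken from any particular tube:

```python
# Small-angle textbook model of electrostatic deflection:
# D = (l * L * Vd) / (2 * d * Va). Dimensions below are assumed.
def deflection_m(l_plate_m: float, l_screen_m: float, v_deflect: float,
                 d_gap_m: float, v_accel: float) -> float:
    return (l_plate_m * l_screen_m * v_deflect) / (2 * d_gap_m * v_accel)

# 2 cm plates, 20 cm plate-to-screen, 5 mm gap, 2 kV acceleration, 100 V drive:
print(deflection_m(0.02, 0.20, 100.0, 0.005, 2000.0))  # 0.02 m (2 cm) deflection
```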
Burn-in
Burn-in is when images are physically "burned" into the screen of the CRT; this occurs due to degradation of the phosphors due to prolonged electron bombardment of the phosphors, and happens when a fixed image or logo is left for too long on the screen, causing it to appear as a "ghost" image or, in severe cases, also when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs, as it also happens to plasma displays and OLED displays.
Evacuation
The CRT is evacuated or exhausted to a partial vacuum in a ~375–475 °C oven, in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them getting drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C, or the CRT was kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT was heated during or after evacuation, and the heat may have been used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump. Formerly mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, catches any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules that it encounters, and condenses on the inside of the CRT, forming a layer that contains the trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems.
Rebuilding
CRTs used to be rebuilt; repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, etc. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worth it. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, which was located in France, closed in 2013.
Reactivation
Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun manually. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors
Phosphors in CRTs emit secondary electrons due to them being inside the vacuum of the CRT. The secondary electrons are collected by the anode of the CRT. Secondary electrons generated by phosphors need to be collected to prevent charges from developing in the screen, which would lead to reduced image brightness since the charge would repel the electron beam.
The phosphors used in CRTs often contain rare earth metals, replacing earlier dimmer phosphors. Early red and green phosphors contained cadmium, and some black and white CRT phosphors also contained beryllium in the form of zinc beryllium silicate, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces. Phosphors composed of smaller particles adhere more strongly to the screen. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare earth phosphors are yttrium oxide for red and yttrium silicide for blue in beam index tubes; an example of an earlier phosphor is copper cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL since there is more time in PAL for phosphors to decay, due to its lower framerate. SMPTE-C phosphors were used in professional video monitors.
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side, used to reflect light forward, provide protection against ions to prevent ion burn by negative ions on the phosphor, manage heat generated by electrons colliding against the phosphor, prevent static build-up that could repel electrons from the screen, form part of the anode and collect the secondary electrons generated by the phosphors in the screen after being hit by the electron beam, providing the electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the roughness of the surface they form, allowing the aluminum coating to have a uniform surface and preventing it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to cause an aluminum coating with holes to be created to allow the solvents to escape.
Phosphor persistence
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
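That last constraint can be expressed as a one-line screening test: the persistence must sit below the refresh interval 1/f. The bare comparison below is the minimum condition; real designs leave extra margin, and the example values are assumptions:

```python
# Screening test: persistence should sit below the refresh interval 1/f.
def persistence_ok(persistence_s: float, refresh_hz: float) -> bool:
    return persistence_s < 1.0 / refresh_hz

print(persistence_ok(1e-3, 60.0))   # True: a 1 ms phosphor in a 16.7 ms frame
print(persistence_ok(50e-3, 60.0))  # False: a 50 ms phosphor would smear
```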
Limitations and workarounds
Blooming
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming
Doming is a phenomenon found on some CRT TVs in which parts of the shadow mask become heated. In TVs that exhibit this behavior, it tends to occur in high-contrast scenes in which there is a largely dark scene with one or more localized bright spots. As the electron beam hits the shadow mask in these areas it heats unevenly. The shadow mask warps due to the heat differences, which causes the electron gun to hit the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns.
During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of the image heat the shadow mask more than dark areas, leading to uneven heating and warping due to thermal expansion driven by the increased electron beam current. The shadow mask is usually made of steel, but it can be made of Invar (a low-thermal-expansion nickel-iron alloy), which withstands two to three times more beam current than a conventional mask without noticeable warping, while also making higher-resolution CRTs easier to achieve. Heat-dissipating coatings may be applied to the shadow mask to limit doming, in a process called blackening.
Bimetal springs may be used in CRTs for TVs to compensate for the warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is mounted to the screen using metal pieces, or using a rail or frame fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, as used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness because the shadow mask blocks most of the electron beam. Slot masks, and especially aperture grilles, do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam, while aperture grilles allow more electrons to pass through.
High voltage
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed for both larger screens and higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean more x-rays and more heat, since the electrons strike at higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.
Size
A practical limit on the size of a CRT is the weight of the thick glass needed to safely sustain its vacuum, since a CRT's exterior is exposed to the full atmospheric pressure; on a 27-inch screen (roughly 400 in2 of faceplate), this load totals approximately 5,900 pounds-force (26 kN). For example, the large 43-inch Sony PVM-4300 weighed far more than typical 32-inch and 19-inch CRTs, while flat-panel TVs of the same screen sizes are much lighter still.
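The load on the faceplate is simply atmospheric pressure times area, which makes the need for thick glass easy to see:

```python
area_in2 = 400.0     # faceplate area of a 27-inch screen, from the text
pressure_psi = 14.7  # standard atmospheric pressure

force_lbf = area_in2 * pressure_psi
print(round(force_lbf), "lbf =", round(force_lbf * 4.448), "N")
# ~5880 lbf (~26 kN) pressing inward on the glass at all times
```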
Size is also limited by the anode voltage, since a larger screen would require a higher voltage to maintain image brightness, and thus greater dielectric strength to prevent arcing and the electrical losses and ozone generation it causes.
Shadow masks also become more difficult to make with increasing resolution and size.
Limits imposed by deflection
At high deflection angles, resolutions and refresh rates (higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, because moving the electron beam through a larger angle requires exponentially larger amounts of power. As an example, increasing the deflection angle from 90 to 120° raises the yoke's power consumption from 40 watts to 80 watts, and increasing it further from 120 to 150° raises it again from 80 to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat due to resistance caused by the skin effect, surface and eddy current losses, and/or possibly cause the glass underneath the coil to become conductive (the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, requiring additional compensation circuitry for electron beam power and shape, which leads to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer; however, they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and the funnel itself. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
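The quoted wattages follow a simple doubling rule: yoke power roughly doubles for every additional 30° of deflection. A minimal sketch fitted to those figures (real yokes will deviate from this idealization):

```python
def yoke_power_watts(angle_deg, base_angle_deg=90.0, base_power_w=40.0):
    """Deflection power modeled as doubling per extra 30 degrees,
    fitted to the 40 W / 80 W / 160 W figures quoted above."""
    return base_power_w * 2 ** ((angle_deg - base_angle_deg) / 30.0)

for angle in (90, 120, 150):
    print(angle, "deg:", yoke_power_watts(angle), "W")  # 40, 80, 160 W
```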
Comparison with other technologies
LCD advantages over CRT: Lower bulk, power consumption and heat generation, higher refresh rates (up to 360 Hz)
CRT advantages over LCD: Better color reproduction, no motion blur, multisyncing available in many monitors, no input lag
OLED advantages over CRT: Lower bulk, similar color reproduction, higher contrast ratios, similar refresh rates (over 60 Hz, up to 120 Hz), except in computer monitors.
On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
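The relationship between the horizontal scanning limit, resolution and refresh rate is a simple division: every line of the frame must be drawn on each refresh. A sketch with a hypothetical 96 kHz monitor and an assumed ~5% vertical-blanking overhead:

```python
def max_refresh_hz(h_freq_hz, visible_lines, blanking_overhead=0.05):
    """Refresh rate cap = horizontal frequency / total lines per frame."""
    return h_freq_hz / (visible_lines * (1 + blanking_overhead))

h_limit_hz = 96_000  # hypothetical horizontal scan limit
print(round(max_refresh_hz(h_limit_hz, 1024)), "Hz at 1280x1024")  # ~89 Hz
print(round(max_refresh_hz(h_limit_hz, 600)), "Hz at 800x600")     # ~152 Hz
```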
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA, which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. For these reasons, CRTs are often preferred for playing video games made in the early 2000s and earlier in spite of their bulk, weight and heat generation; some pieces of technology even require a CRT to function, because they were not built with the capabilities of modern displays in mind.
CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
Types
CRTs were produced in two major categories: picture tubes and display tubes. Picture tubes were used in TVs, while display tubes were used in computer monitors. Display tubes had higher resolution, and when used in computer monitors they sometimes had adjustable overscan, or sometimes underscan.
Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
Monochrome CRTs
If the CRT is a black and white (B&W or monochrome) CRT, there is a single electron gun in the neck, and the funnel is coated on the inside with aluminum applied by evaporation: the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps (otherwise necessary to prevent ion burn on the phosphor), reflects light generated by the phosphor towards the screen, manages heat, and absorbs electrons, providing a return path for them. Previously, funnels were coated on the inside with aquadag, used because it can be applied like paint, and the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors; this also increased image brightness, since the aluminum reflected light that would otherwise be lost inside the CRT towards the outside. In aluminized monochrome CRTs, aquadag is used on the outside. A single aluminum coating covers both the funnel and the screen.
The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals; a hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Before aluminization, monochrome CRTs used ion traps, which required magnets; the magnet deflected the electrons away from the more difficult-to-deflect ions, letting the electrons through while the ions collided with a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor; since ions are harder to deflect than electrons, the burn appears as a black dot in the center of the screen.
The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.
Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
When a monochrome CRT is shut off, the deflection collapses before the electron beam stops, so the image shrinks to a small white dot in the center of the screen, which can take a few moments to fade away as the remaining charge dissipates and the phosphor glow decays.
Color CRTs
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs).
Color CRTs have three electron guns, one for each primary color (red, green and blue), arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). The triangular configuration is often called delta-gun, based on its relation to the shape of the Greek letter delta (Δ). The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that the electrons that strike the inside of any hole will be reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad and sits a small fraction of an inch behind the screen.
Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).
The funnel is coated with aquadag on both sides, while the screen has a separate aluminum coating applied in a vacuum, deposited after the phosphor coating and facing the electron gun. The aluminum coating protects the phosphor from ions; absorbs secondary electrons, providing them with a return path and preventing them from electrostatically charging the screen (which would repel electrons and reduce image brightness); reflects the light from the phosphors forward; and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
Shadow mask
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy (for example, due to an anode voltage that is too high), the shadow mask can warp from the heat; this can also happen during the Lehr baking, at ~435 °C, of the frit seal between the faceplate and the funnel of the CRT.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.
Screen manufacture
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969 and by Panasonic in 1970. The black matrix eliminates light leakage from one phosphor to another by isolating the phosphor dots from one another, so that part of the electron beam lands on the black matrix instead. Warping of the shadow mask also makes the black matrix necessary. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, each phosphor dot receives only a very small amount of energy, limiting image brightness.
Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Such technology was sold by Toshiba under the Microfilter brand name. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask is installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65–88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control the thermal expansion of the frit. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate, or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven in what is called a Lehr bake to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process. The edges of the screen, and the edges of the funnel that mate with the screen, are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied, to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435–475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and the CRT is evacuated.
Convergence and purity in color CRTs
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.
The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one pair of rings has 2 poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.
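The behavior of each ring pair can be sketched as plain vector addition: each ring contributes a field of fixed magnitude whose direction follows the ring's rotation, so the pair's combined magnitude is 2B·cos(θ/2) for a relative angle θ. A minimal illustration with an arbitrary unit field strength:

```python
import math

def combined_field(b, angle1_deg, angle2_deg):
    """Vector sum of two equal-magnitude ring-magnet fields; returns
    (magnitude, direction in degrees)."""
    a1, a2 = math.radians(angle1_deg), math.radians(angle2_deg)
    bx = b * (math.cos(a1) + math.cos(a2))
    by = b * (math.sin(a1) + math.sin(a2))
    return math.hypot(bx, by), math.degrees(math.atan2(by, bx))

print(combined_field(1.0, 0, 0))    # (2.0, 0.0): aligned rings, maximum field
print(combined_field(1.0, 0, 180))  # (~0.0, ...): opposed rings cancel
print(combined_field(1.0, 30, 90))  # (~1.73, 60.0): 2*cos(30 deg), along the bisector
```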
On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. In this case the deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask are not spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen is not spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.
The convergence signal may instead be a sawtooth signal with a slight sine wave appearance; the sine wave component is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have several sets of s-capacitors, one for each refresh rate.
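A rule of thumb sometimes used for this kind of S-correction is to make half the resonant period of the yoke inductance and s-capacitor comparable to the horizontal trace time, t ≈ π√(LC). The sketch below applies that assumed rule with illustrative component values; actual designs vary:

```python
import math

def s_capacitor_farads(trace_time_s, yoke_inductance_h):
    # From t_trace ~ pi * sqrt(L * C), solve for C
    return (trace_time_s / math.pi) ** 2 / yoke_inductance_h

# Assumed: ~31 kHz scan with ~26 us of visible trace, 1 mH yoke inductance
print(s_capacitor_farads(26e-6, 1e-3))  # ~6.8e-08 F, i.e. ~68 nF
# Halving the scan rate (~52 us of trace) quadruples the required value,
# which is why multi-syncing monitors switch between sets of s-capacitors:
print(s_capacitor_farads(52e-6, 1e-3))  # ~2.7e-07 F
```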
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing cost, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.
Dynamic color convergence and purity are one of the main reasons why until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
Alternatively, the guns can be aligned with one another (converged) using convergence rings placed right outside the neck, with one ring per gun. The rings can have north and south poles. There can be four sets of rings: one to adjust RGB convergence, a second to adjust red and blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
Magnetic shielding and degaussing
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.
Color CRT displays in TV sets and computer monitors often have a built-in degaussing (demagnetizing) coil mounted around the perimeter of the CRT face. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the coil which fades to zero over a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
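The degaussing waveform can be modeled as a mains-frequency sine wave under a decaying envelope, which walks the mask material through progressively smaller hysteresis loops until it is left demagnetized. A sketch with assumed coil figures:

```python
import math

def degauss_current(t_s, peak_a=5.0, mains_hz=60.0, decay_s=1.0):
    """Decaying alternating degaussing current (all parameters assumed)."""
    return peak_a * math.exp(-t_s / decay_s) * math.sin(2 * math.pi * mains_hz * t_s)

for t in (0.004, 0.5, 2.0, 5.0):
    print(f"t={t}s i={degauss_current(t):+.3f} A")
# The amplitude falls from several amperes to essentially zero in seconds.
```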
Resolution
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, since the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller CRTs, the stripes maintain position by themselves, but larger aperture-grille CRTs require one or two crosswise (horizontal) support wires, which block electrons and are therefore visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with correspondingly oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
Projection CRTs
Projection CRTs were used in CRT projectors and CRT rear-projection TVs. They are usually small (7–9 inches across); have a phosphor that generates either red, green or blue light, making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs generally lasted longer and could provide higher brightness and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten, or of barium and calcium aluminates, or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3 mA of normal cathodes, making projection CRTs bright enough to be used as light sources. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings, just like color CRTs, to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings: one with two poles, one with four poles, and another with six poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
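The need for liquid cooling follows directly from the beam power, which is just anode voltage times beam current, nearly all of which ends up as heat in a small faceplate:

```python
anode_v = 27_000.0  # anode voltage of a 5-inch projection CRT, from the text
beam_a = 2e-3       # current from the special dispenser-type cathode

print(anode_v * beam_a, "W of beam power")  # 54.0 W into a ~5-inch screen
print(anode_v * 0.3e-3, "W with a normal 0.3 mA cathode, for comparison")
```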
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
Beam-index tube
Beam-index tubes, also known as Uniray, Apple CRT or Indextron, were an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems and allowing for shallower CRTs with higher deflection angles. They also required a lower-voltage power supply for the final anode, since they lacked a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made them immune to the earth's magnetic field, made degaussing unnecessary, and increased image brightness. A beam-index tube was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating and a single electron gun, but with a screen bearing an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron), and with a side-mounted photomultiplier tube or photodiode on the funnel, pointed towards the rear of the screen, which tracked the electron beam so that the color phosphors could be activated separately from one another by the same beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor not covered by an aluminum layer. The design was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since the photodiodes needed some light emission from the phosphors at all times to track the electron beam. The design allowed single-CRT color projectors: normally, CRT projectors use three CRTs, one for each color, since the high anode voltage and beam current generate so much heat that a shadow mask would be impractical and inefficient, warping under the heat produced (shadow masks absorb most of the electron beam, and hence most of the energy it carries). Using three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and that moving the projector required recalibration. A single CRT eliminated the need for calibration, but brightness was decreased, since one CRT screen had to serve all three colors instead of each color having its own screen. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, because each has a single, uniform phosphor coating.
Flat CRTs
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays) which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and used flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming. LG's Flatron technology is based on this technology developed by Zenith, now a subsidiary of LG.
Flat CRTs pose a number of challenges, notably deflection. Vertical deflection boosters are required to increase the current sent to the vertical deflection coils, compensating for the reduced curvature. The CRTs used in the Sinclair TV80 and in many Sony Watchmans were flat only in that they were not deep and their front screens were flat; their electron guns were placed to one side of the screen. The TV80 used electrostatic deflection, while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video doorbells.
Radar CRTs
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The deflection yoke rotated, causing the beam to sweep in a circular fashion. The screen often used two phosphors: a bright, short-persistence one that appeared only as the beam scanned the display, and a long-persistence afterglow. After the beam moved on, the dimmer long-persistence afterglow remained lit where the beam had struck, preserving the radar targets "written" by the beam until the beam struck that spot again.
Oscilloscope CRTs
In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with TV and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. TVs use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15 kV. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
Microchannel plate
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.
Graticules
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
Image storage tubes
These are found in analog phosphor storage oscilloscopes, and are distinct from digital storage oscilloscopes, which rely on solid-state digital memory to store the image.
Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.
When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors
Vector monitors were used in early computer-aided design systems and in some late-1970s to mid-1980s arcade games such as Asteroids.
They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
Data storage tubes
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
Cat's eye
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Charactrons
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo
Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronic Engineers. These had 10 electron guns which produced electron beams in the form of digits, in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexity of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their need for several supply voltages, including a high voltage, made them uncommon.
Flood-beam CRT
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
Print-head CRT
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.
Zeus – thin CRT display
In the late 1990s and early 2000s, Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The cathode of this display was mounted under the front of the display, and the electrons from the cathode would be directed to the back of the display, where they would stay until extracted by electrodes near the front and directed towards the front of the display, which carried phosphor dots. The devices were demonstrated but never marketed.
Slimmer CRT
Some CRT manufacturers, notably LG.Philips Displays (later LP Displays) and Samsung SDI, developed slimmer tubes, sold under the trade names Superslim, Ultraslim and Vixlim (by Samsung), and Cybertube and Cybertube+ (both by LG.Philips Displays). These slim tubes were notably shallower than conventional flat CRTs of the same screen size.
Health concerns
Ionizing radiation
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered not harmful. Food and Drug Administration regulations strictly limit, for instance, TV receivers to 0.5 milliroentgens per hour at a short distance from any external surface; since 2007, most CRTs have emissions that fall well below this limit. Note that the roentgen is an outdated unit that does not account for dose absorption; the conversion rate is about 0.877 rem per roentgen. Assuming that the viewer absorbed the entire dose (which is unlikely) and watched TV for 2 hours a day, a 0.5 milliroentgen hourly dose would increase the viewer's yearly dose by about 320 millirem. For comparison, the average background radiation in the United States is 310 millirem a year. Negative effects of chronic radiation are not generally noticeable until doses exceed 20,000 millirem.
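The yearly dose estimate above can be reproduced directly:

```python
hourly_mR = 0.5           # regulatory surface-emission limit, from the text
hours_per_day = 2
rem_per_roentgen = 0.877  # approximate soft-tissue conversion for x-rays

yearly_mrem = hourly_mR * hours_per_day * 365 * rem_per_roentgen
print(round(yearly_mrem), "mrem/year")  # ~320 mrem, comparable to US background
```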
The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.
Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting "X-radiation in excess of desirable levels". It was later found that TV sets from all manufacturers were also emitting radiation. This caused TV industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. It was recommended to TV set owners to always be at a distance of at least 6 feet from the screen of the TV set, and to avoid "prolonged exposure" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told to not modify their set's internals to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to "go into homes to test all of the nation's 15 million color sets and to install radiation devices in them". The FDA eventually began regulating radiation emissions from all electronic products in the US.
Toxicity
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) has used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose it to weak acids and test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).
Flicker
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs, as most TVs run at 50 Hz (PAL) or 60 Hz (NTSC), although some 100 Hz PAL TVs are flicker-free. Typically only low-end monitors run at such low frequencies; most computer monitors support at least 75 Hz, and high-end monitors are capable of 100 Hz or more, eliminating any perception of flicker. 100 Hz PAL was often achieved using interleaved scanning, dividing the circuit and scan into two 50 Hz beams. Non-computer CRTs, such as those for sonar or radar, may have long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.
High-frequency audible noise
50 Hz/60 Hz CRTs used for TV operate with horizontal scanning frequencies of 15,750 Hz and 15,734.27 Hz (for monochrome and color NTSC systems respectively) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT TV. The sound is due to magnetostriction in the magnetic core and periodic movement of the windings of the flyback transformer, but it can also be created by movement of the deflection coils, yoke or ferrite beads.
This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).
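The horizontal scanning frequency is simply the number of scan lines per frame (including blanking lines) multiplied by the frame rate, which makes the quoted figures easy to verify:

```python
print(525 * 30)             # 15750 Hz: monochrome NTSC (525 lines x 30 frames/s)
print(525 * 30_000 / 1001)  # ~15734.27 Hz: color NTSC's slightly offset rate
print(625 * 25)             # 15625 Hz: PAL (625 lines x 25 frames/s)
print(2 * 625 * 25)         # 31250 Hz: a 100 Hz PAL set scans twice as fast,
                            # well above the audible range for most people
```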
Implosion
If the glass wall is damaged, atmospheric pressure can implode the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in TVs and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid injury.
Implosion protection
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for the CRT to be mounted to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm2.
Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it, then mounting it on the CRT; the band cools afterwards, shrinking in size and putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode.
The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.
Alternatively, the compression exerted by the rim band may be used to make any cracks in the screen propagate laterally at high speed, so that they reach the funnel and fully penetrate it before they fully penetrate the screen. This is possible because the funnel walls are thinner than the screen. Fully penetrating the funnel first lets air enter the CRT from a short distance behind the screen, preventing an implosion by ensuring that, by the time the cracks fully penetrate the screen and it breaks, the CRT has already filled with air.
Electric shock
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger-screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time, which can then discharge suddenly through a ground path, such as an inattentive human grounding a capacitor discharge lead. An average monochrome CRT may use 1–1.5 kV of anode voltage per inch.
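By that rule of thumb (an approximation rather than a specification), a hypothetical 15-inch monochrome display would be expected to operate at roughly 15–22 kV of anode voltage.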
Security concerns
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.
Recycling
Due to the toxins contained in CRT monitors, the United States Environmental Protection Agency (EPA) created rules in October 2001 stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.
As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have a relatively high concentration of lead and phosphors, both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The EPA includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information, such as an address and phone number, to verify that the CRTs come from a California source in order to participate in the CRT Recycling Payment System.
In Europe, disposal of CRT TVs and monitors is covered by the WEEE Directive.
Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7 grams of phosphor.
The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires or using a resistively heated nichrome wire.
Leaded CRT glass was sold to be remelted into other CRTs, or broken down and used in road construction, tiles, concrete and cement bricks, and fiberglass insulation, or as flux in metals smelting.
A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than being recycled.
See also
Cathodoluminescence
Crookes tube
Scintillation (physics)
Laser-powered phosphor display, similar to a CRT, replaces the electron beam with a laser beam
Applications of the CRT for different display purposes:
Analog television
Image displaying
Comparison of CRT, LCD, plasma, and OLED displays
Overscan
Raster scan
Scan line
Historical aspects:
Direct-view bistable storage tube
Flat-panel display
Geer tube
History of display technology
Image dissector
LCD television, LED-backlit LCD, LED display
Penetron
Surface-conduction electron-emitter display
Trinitron
Safety and precautions:
Monitor filter
Photosensitive epilepsy
TCO Certification
References
Selected patents
: Zworykin Television System
External links
Consumer electronics
Display technology
Television technology
Vacuum tube displays
Audiovisual introductions in 1897
Telecommunications-related introductions in 1897
Articles containing video clips
Legacy hardware | Cathode-ray tube | [
"Technology",
"Engineering"
] | 24,545 | [
"Information and communications technology",
"Electronic engineering",
"Television technology",
"Display technology"
] |
6,015 | https://en.wikipedia.org/wiki/Crystal | A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification.
The word crystal derives from the Ancient Greek word (), meaning both "ice" and "rock crystal", from (), "icy cold, frost".
Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Polycrystals include most metals, rocks, ceramics, and ice. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics.
Despite the name, lead crystal, crystal glass, and related products are not crystals, but rather types of glass, i.e. amorphous solids.
Crystals, or crystalline solids, are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements.
Crystal structure (microscopic)
The scientific definition of a "crystal" is based on the microscopic arrangement of atoms inside it, called the crystal structure. A crystal is a solid where the atoms form a periodic arrangement. (Quasicrystals are an exception, see below).
Not all solids are crystals. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a polycrystalline structure. In the final block of ice, each of the small crystals (called "crystallites" or "grains") is a true crystal with a periodic arrangement of atoms, but the whole polycrystal does not have a periodic arrangement of atoms, because the periodic pattern is broken at the grain boundaries. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice, rocks, etc. Solids that are neither crystalline nor polycrystalline, such as glass, are called amorphous solids, also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does.
A crystal structure (an arrangement of atoms in a crystal) is characterized by its unit cell, a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal.
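As a concrete sketch of this stacking (a minimal illustration in Python assuming a simple cubic cell with one basis atom and an arbitrary lattice constant of 1.0; the function name is hypothetical), the atom positions of a finite crystal can be generated by translating the basis atoms of the unit cell along integer combinations of the lattice vectors:

    import numpy as np

    # Lattice vectors of an assumed simple cubic unit cell
    # (illustrative lattice constant of 1.0 in arbitrary units).
    a1 = np.array([1.0, 0.0, 0.0])
    a2 = np.array([0.0, 1.0, 0.0])
    a3 = np.array([0.0, 0.0, 1.0])
    cell = np.array([a1, a2, a3])

    # Basis: fractional coordinates of the atoms inside one unit cell.
    basis = np.array([[0.0, 0.0, 0.0]])

    def lattice_points(n):
        """Generate atom positions for an n x n x n block of unit cells."""
        positions = []
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    origin = i * a1 + j * a2 + k * a3  # corner of cell (i, j, k)
                    for frac in basis:
                        # Convert fractional to Cartesian coordinates.
                        positions.append(origin + frac @ cell)
        return np.array(positions)

    print(lattice_points(2).shape)  # (8, 3): eight atoms in a 2 x 2 x 2 block

Adding a second basis atom at fractional coordinates (0.5, 0.5, 0.5) would yield a body-centered arrangement, and different lattice vectors reproduce the other crystal systems.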
The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries (230 is commonly cited, but this treats chiral equivalents as separate entities), called crystallographic space groups. These are grouped into 7 crystal systems, such as cubic crystal system (where the crystals may form cubes or rectangular boxes, such as halite shown at right) or hexagonal crystal system (where the crystals may form hexagons, such as ordinary water ice).
Crystal faces, shapes and crystallographic forms
Crystals are commonly recognized, macroscopically, by their shape, consisting of flat faces with sharp angles. These shape characteristics are not necessary for a crystal—a crystal is scientifically defined by its microscopic atomic arrangement, not its macroscopic shape—but the characteristic macroscopic shape is often present and easy to see.
Euhedral crystals are those that have obvious, well-formed flat faces. Anhedral crystals do not, usually because the crystal is one grain in a polycrystalline solid.
The flat faces (also called facets) of a euhedral crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal: they are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram on right.)
One of the oldest techniques in the science of crystallography consists of measuring the three-dimensional orientations of the faces of a crystal, and using them to infer the underlying crystal symmetry.
A crystal's crystallographic forms are sets of possible faces of the crystal that are related by one of the symmetries of the crystal. For example, crystals of galena often take the shape of cubes, and the six faces of the cube belong to a crystallographic form that displays one of the symmetries of the isometric crystal system. Galena also sometimes crystallizes as octahedrons, and the eight faces of the octahedron belong to another crystallographic form reflecting a different symmetry of the isometric system. A crystallographic form is described by placing the Miller indices of one of its faces within brackets. For example, the octahedral form is written as {111}, and the other faces in the form are implied by the symmetry of the crystal.
Forms may be closed, meaning that the form can completely enclose a volume of space, or open, meaning that it cannot. The cubic and octahedral forms are examples of closed forms. All the forms of the isometric system are closed, while all the forms of the monoclinic and triclinic crystal systems are open. A crystal's faces may all belong to the same closed form, or they may be a combination of multiple open or closed forms.
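How a closed form expands into individual faces can be sketched in a few lines of Python (an illustrative snippet that models the full cubic point symmetry simply as axis permutations combined with sign changes; the function name is hypothetical):

    from itertools import permutations, product

    def faces_of_cubic_form(h, k, l):
        """Enumerate the distinct faces of a cubic form {hkl}: all axis
        permutations of the indices combined with all sign choices."""
        faces = set()
        for perm in permutations((h, k, l)):
            for signs in product((1, -1), repeat=3):
                faces.add(tuple(s * p for s, p in zip(signs, perm)))
        return sorted(faces)

    print(len(faces_of_cubic_form(1, 1, 1)))  # 8 faces: the octahedral form {111}
    print(len(faces_of_cubic_form(1, 0, 0)))  # 6 faces: the cube form {100}

The snippet recovers the eight faces of the octahedral form {111} and the six faces of the cube form {100} discussed above.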
A crystal's habit is its visible external shape. This is determined by the crystal structure (which restricts the possible facet orientations), the specific crystal chemistry and bonding (which may favor some facet types over others), and the conditions under which the crystal formed.
Occurrence in nature
Rocks
By volume and weight, the largest concentrations of crystals in the Earth are part of its solid bedrock. Crystals found in rocks typically range in size from a fraction of a millimetre to several centimetres across, although exceptionally large crystals are occasionally found; the world's largest known naturally occurring crystal is a crystal of beryl from Malakialina, Madagascar.
Some crystals have formed by magmatic and metamorphic processes, giving origin to large masses of crystalline rock. The vast majority of igneous rocks are formed from molten magma and the degree of crystallization depends primarily on the conditions under which they solidified. Such rocks as granite, which have cooled very slowly and under great pressures, have completely crystallized; but many kinds of lava were poured out at the surface and cooled very rapidly, and in this latter group a small amount of amorphous or glassy matter is common. Other crystalline rocks, the metamorphic rocks such as marbles, mica-schists and quartzites, are recrystallized. This means that they were at first fragmental rocks like limestone, shale and sandstone and have never been in a molten condition nor entirely in solution, but the high temperature and pressure conditions of metamorphism have acted on them by erasing their original structures and inducing recrystallization in the solid state.
Other rock crystals have formed out of precipitation from fluids, commonly water, to form druses or quartz veins. Evaporites such as halite, gypsum and some limestones have been deposited from aqueous solution, mostly owing to evaporation in arid climates.
Ice
Water-based ice, in the form of snow, sea ice, and glaciers, is a common crystalline or polycrystalline structure on Earth and other planets. A single snowflake is a single crystal or a collection of crystals, while an ice cube is a polycrystal. Ice crystals may form from cooling liquid water below its freezing point, as with ice cubes or a frozen lake. Frost, snowflakes, or small ice crystals suspended in the air (ice fog) more often grow from a supersaturated gaseous solution of water vapor and air, when the temperature of the air drops below its dew point, without passing through a liquid state. Another unusual property of water is that it expands rather than contracts when it crystallizes.
Organigenic crystals
Many living organisms are able to produce crystals grown from an aqueous solution, for example calcite and aragonite in the case of most molluscs or hydroxylapatite in the case of bones and teeth in vertebrates.
Polymorphism and allotropy
The same group of atoms can often solidify in many different ways. Polymorphism is the ability of a solid to exist in more than one crystal form. For example, water ice is ordinarily found in the hexagonal form Ice Ih, but can also exist as the cubic Ice Ic, the rhombohedral ice II, and many other forms. The different polymorphs are usually called different phases.
In addition, the same atoms may be able to form noncrystalline phases. For example, water can also form amorphous ice, while SiO2 can form both fused silica (an amorphous glass) and quartz (a crystal). Likewise, if a substance can form crystals, it can also form polycrystals.
For pure chemical elements, polymorphism is known as allotropy. For example, diamond and graphite are two crystalline forms of carbon, while amorphous carbon is a noncrystalline form. Polymorphs, despite having the same atoms, may have very different properties. For example, diamond is the hardest substance known, while graphite is so soft that it is used as a lubricant. Chocolate can form six different types of crystals, but only one has the suitable hardness and melting point for candy bars and confections. Polymorphism in steel is responsible for its ability to be heat treated, giving it a wide range of properties.
Polyamorphism is a similar phenomenon where the same atoms can exist in more than one amorphous solid form.
Crystallization
Crystallization is the process of forming a crystalline structure from a fluid or from materials dissolved in a fluid. (More rarely, crystals may be deposited directly from gas; see: epitaxy and frost.)
Crystallization is a complex and extensively-studied field, because depending on the conditions, a single fluid can solidify into many different possible forms. It can form a single crystal, perhaps with various possible phases, stoichiometries, impurities, defects, and habits. Or, it can form a polycrystal, with various possibilities for the size, arrangement, orientation, and phase of its grains. The final form of the solid is determined by the conditions under which the fluid is being solidified, such as the chemistry of the fluid, the ambient pressure, the temperature, and the speed with which all these parameters are changing.
Specific industrial techniques to produce large single crystals (called boules) include the Czochralski process and the Bridgman technique. Other less exotic methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization.
Large single crystals can be created by geological processes. For example, selenite crystals in excess of 10 m are found in the Cave of the Crystals in Naica, Mexico. For more details on geological crystal formation, see above.
Crystals can also be formed by biological processes, see above. Conversely, some organisms have special techniques to prevent crystallization from occurring, such as antifreeze proteins.
Defects, impurities, and twinning
An ideal crystal has every atom in a perfect, exactly repeating pattern. However, in reality, most crystalline materials have a variety of crystallographic defects: places where the crystal's pattern is interrupted. The types and structures of these defects may have a profound effect on the properties of the materials.
A few examples of crystallographic defects include vacancy defects (an empty space where an atom should fit), interstitial defects (an extra atom squeezed in where it does not fit), and dislocations (see figure at right). Dislocations are especially important in materials science, because they help determine the mechanical strength of materials.
Another common type of crystallographic defect is an impurity, meaning that the "wrong" type of atom is present in a crystal. For example, a perfect crystal of diamond would only contain carbon atoms, but a real crystal might perhaps contain a few boron atoms as well. These boron impurities change the diamond's color to slightly blue. Likewise, the only difference between ruby and sapphire is the type of impurities present in a corundum crystal.
In semiconductors, a special type of impurity, called a dopant, drastically changes the crystal's electrical properties. Semiconductor devices, such as transistors, are made possible largely by putting different semiconductor dopants into different places, in specific patterns.
Twinning is a phenomenon somewhere between a crystallographic defect and a grain boundary. Like a grain boundary, a twin boundary has different crystal orientations on its two sides. But unlike a grain boundary, the orientations are not random, but related in a specific, mirror-image way.
Mosaicity is a spread of crystal plane orientations. A mosaic crystal consists of smaller crystalline units that are somewhat misaligned with respect to each other.
Chemical bonds
In general, solids can be held together by various types of chemical bonds, such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. None of these are necessarily crystalline or non-crystalline. However, there are some general trends as follows:
Metals crystallize rapidly and are almost always polycrystalline, though there are exceptions like amorphous metal and single-crystal metals. The latter are grown synthetically; for example, fighter-jet turbines are typically made by first growing a single crystal of titanium alloy, increasing its strength and melting point over polycrystalline titanium. A small piece of metal may naturally form into a single crystal, such as Type 2 telluric iron, but larger pieces generally do not unless extremely slow cooling occurs. For example, iron meteorites are often composed of a single crystal, or of many large crystals that may be several meters in size, due to very slow cooling in the vacuum of space. The slow cooling may allow the precipitation of a separate phase within the crystal lattice, which forms at specific angles determined by the lattice, called Widmanstätten patterns.
Ionic compounds typically form when a metal reacts with a non-metal, such as sodium with chlorine. These often form substances called salts, such as sodium chloride (table salt) or potassium nitrate (saltpeter), with crystals that are often brittle and cleave relatively easily. Ionic materials are usually crystalline or polycrystalline. In practice, large salt crystals can be created by solidification of a molten fluid, or by crystallization out of a solution. Some ionic compounds can be very hard, such as oxides like aluminium oxide found in many gemstones such as ruby and synthetic sapphire.
Covalently bonded solids (sometimes called covalent network solids) are typically formed from one or more non-metals, such as carbon or silicon and oxygen, and are often very hard, rigid, and brittle. These are also very common, notable examples being diamond and quartz respectively.
Weak van der Waals forces also help hold together certain crystals, such as crystalline molecular solids, as well as the interlayer bonding in graphite. Substances such as fats, lipids and wax form molecular bonds because the large molecules do not pack as tightly as atomic bonds allow. This leads to crystals that are much softer and more easily pulled apart or broken; common examples include chocolates, candles, and viruses. Water ice and dry ice are examples of other materials with molecular bonding. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization, and sometimes polymers are completely amorphous.
Quasicrystals
A quasicrystal consists of arrays of atoms that are ordered but not strictly periodic. They have many attributes in common with ordinary crystals, such as displaying a discrete pattern in x-ray diffraction, and the ability to form shapes with smooth, flat faces.
Quasicrystals are most famous for their ability to show five-fold symmetry, which is impossible for an ordinary periodic crystal (see crystallographic restriction theorem).
The International Union of Crystallography has redefined the term "crystal" to include both ordinary periodic crystals and quasicrystals ("any solid having an essentially discrete diffraction diagram").
Quasicrystals, first discovered in 1982, are quite rare in practice. Only about 100 solids are known to form quasicrystals, compared to about 400,000 periodic crystals known in 2004. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for the discovery of quasicrystals.
Special properties from anisotropy
Crystals can have certain special electrical, optical, and mechanical properties that glass and polycrystals normally cannot. These properties are related to the anisotropy of the crystal, i.e. the lack of rotational symmetry in its atomic arrangement. One such property is the piezoelectric effect, where a voltage across the crystal can shrink or stretch it. Another is birefringence, where a double image appears when looking through a crystal. Moreover, various properties of a crystal, including electrical conductivity, electrical permittivity, and Young's modulus, may be different in different directions in a crystal. For example, graphite crystals consist of a stack of sheets, and although each individual sheet is mechanically very strong, the sheets are rather loosely bound to each other. Therefore, the mechanical strength of the material is quite different depending on the direction of stress.
Not all crystals have all of these properties. Conversely, these properties are not quite exclusive to crystals. They can appear in glasses or polycrystals that have been made anisotropic by working or stress—for example, stress-induced birefringence.
Crystallography
Crystallography is the science of measuring the crystal structure (in other words, the atomic arrangement) of a crystal. One widely used crystallography technique is X-ray diffraction. Large numbers of known crystal structures are stored in crystallographic databases.
Image gallery
See also
Atomic packing factor
Anticrystal
Cocrystal
Colloidal crystal
Crystal growth
Crystal oscillator
Liquid crystal
Time crystal
References
Further reading | Crystal | [
"Chemistry",
"Materials_science"
] | 3,947 | [
"Crystallography",
"Crystals"
] |
6,016 | https://en.wikipedia.org/wiki/Cytosine | Cytosine () (symbol C or Cyt) is one of the four nucleotide bases found in DNA and RNA, along with adenine, guanine, and thymine (uracil in RNA). It is a pyrimidine derivative, with a heterocyclic aromatic ring and two substituents attached (an amine group at position 4 and a keto group at position 2). The nucleoside of cytosine is cytidine. In Watson–Crick base pairing, it forms three hydrogen bonds with guanine.
History
Cytosine was discovered and named by Albrecht Kossel and Albert Neumann in 1894, when it was hydrolyzed from calf thymus tissue. A structure was proposed in 1903, and the compound was synthesized (and the structure thus confirmed) in the laboratory in the same year.
In 1998, cytosine was used in an early demonstration of quantum information processing when Oxford University researchers implemented the Deutsch–Jozsa algorithm on a two qubit nuclear magnetic resonance quantum computer (NMRQC).
In March 2015, NASA scientists reported the formation of cytosine, along with uracil and thymine, from pyrimidine under space-like laboratory conditions, which is of interest because pyrimidine has been found in meteorites although its origin is unknown.
Chemical reactions
Cytosine can be found as part of DNA, as part of RNA, or as a part of a nucleotide. As cytidine triphosphate (CTP), it can act as a co-factor to enzymes, and can transfer a phosphate to convert adenosine diphosphate (ADP) to adenosine triphosphate (ATP).
In DNA and RNA, cytosine is paired with guanine. However, it is inherently unstable, and can change into uracil (spontaneous deamination). This can lead to a point mutation if not repaired by the DNA repair enzymes such as uracil glycosylase, which cleaves a uracil in DNA.
Cytosine can also be methylated into 5-methylcytosine by an enzyme called DNA methyltransferase or be methylated and hydroxylated to make 5-hydroxymethylcytosine. The difference in rates of deamination of cytosine and 5-methylcytosine (to uracil and thymine) forms the basis of bisulfite sequencing.
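The logic can be sketched in a few lines of Python (a deliberately simplified model; real protocols involve incomplete conversion, PCR amplification, and strand-specific handling, and the function name is hypothetical): bisulfite treatment deaminates unmethylated cytosines to uracil, which is read as thymine after sequencing, while 5-methylcytosines are protected and still read as cytosine.

    def bisulfite_convert(sequence, methylated_positions):
        """Simulate bisulfite treatment of one DNA strand: unmethylated
        'C' bases are deaminated and later read as 'T', while cytosines
        at methylated positions are protected and stay 'C'."""
        return "".join(
            "T" if base == "C" and i not in methylated_positions else base
            for i, base in enumerate(sequence)
        )

    # Comparing treated and untreated reads reveals the methylation sites:
    print(bisulfite_convert("ACGTCCG", methylated_positions={4}))  # "ATGTCTG"

Positions that still read "C" after treatment mark methylated cytosines.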
Biological function
When found third in a codon of RNA, cytosine is synonymous with uracil, as they are interchangeable as the third base.
When found as the second base in a codon, the third is always interchangeable. For example, UCU, UCC, UCA and UCG are all serine, regardless of the third base.
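A short Python sketch (illustrative only, covering just the serine codons named above) makes the degeneracy concrete:

    # The four serine codons differ only in the third base.
    CODON_TABLE = {"UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser"}

    for third_base in "UCAG":
        codon = "UC" + third_base
        print(codon, "->", CODON_TABLE[codon])  # all four translate to Ser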
Active enzymatic deamination of cytosine or 5-methylcytosine by the APOBEC family of cytosine deaminases could have both beneficial and detrimental implications for various cellular processes as well as for organismal evolution. The implications of deamination of 5-hydroxymethylcytosine, on the other hand, remain less understood.
Theoretical aspects
Until October 2021, cytosine had not been found in meteorites, which suggested the first strands of RNA and DNA had to obtain this building block elsewhere. Cytosine likely formed within some meteorite parent bodies, but did not persist within these bodies due to an effective deamination reaction into uracil.
In October 2021, cytosine was announced as having been found in meteorites by researchers in a joint Japan/NASA project that used novel methods of detection which avoided damaging nucleotides as they were extracted from meteorites.
References
External links and citations
Cytosine MS Spectrum
Nucleobases
Amines
Pyrimidones | Cytosine | [
"Chemistry"
] | 802 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
6,019 | https://en.wikipedia.org/wiki/Computational%20chemistry | Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena.
Overview
Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.
Historically, computational chemistry has had two different aspects:
Computational studies, used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments.
These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms.
History
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.
In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.
In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger.
One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".
Applications
There are several fields within computational chemistry.
The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
Storing and searching for data on chemical entities (see chemical databases).
Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)).
Computational approaches to help in the efficient synthesis of compounds.
Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).
These fields can give rise to several applications as shown below.
Catalysis
Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Drug development
Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally like pKa's of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.
Aside from drug synthesis, drug carriers are also researched by computational chemists for nanomaterials. Simulation allows researchers to test the effectiveness and stability of drug carriers in modeled environments. Understanding how water interacts with these nanomaterials helps ensure stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Computational chemistry databases
Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data, which helps researchers choose methods and basis sets and have greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also use purely calculated data, which uses calculated values in place of experimental values. Purely calculated data avoids having to adjust for differing experimental conditions, such as zero-point energy. These calculations can also avoid experimental errors for difficult-to-test molecules. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.
BindingDB: Contains experimental information about protein-small molecule interactions.
RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors)
ChEMBL: Contains data from research on drug development such as assay results.
DrugBank: Data about mechanisms of drugs can be found here.
Methods
Ab initio method
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).
Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.
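In standard notation, the LCAO ansatz writes each molecular orbital $\psi_j$ as a weighted sum of the $N$ atom-centered basis functions $\chi_i$:

$$\psi_j = \sum_{i=1}^{N} c_{ij}\,\chi_i$$

where the expansion coefficients $c_{ij}$ are determined variationally.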
A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.
Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.
In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.
The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.
Computational thermochemistry
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.
Chemical dynamics
After the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), the wave packet corresponding to the nuclear degrees of freedom is propagated via the time-evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.
The most popular methods for propagating the wave packet associated to the molecular geometry are:
the Chebyshev (real) polynomial,
the multi-configuration time-dependent Hartree method (MCTDH),
the semiclassical method
and the split operator technique explained below.
Split operator technique
How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one such method for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems. Computational cost refers to how much time it takes for computers to calculate these chemical systems, which can be days for more complex systems. Quantum systems are difficult and time-consuming for humans to solve. Split operator methods help computers calculate these systems quickly by solving the sub-problems in a quantum differential equation. The method does this by separating the differential equation into two (or more, when there are more than two operators) simpler equations. Once solved, the split equations are combined into one equation again to give an easily calculable solution.
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider a linear differential equation $\frac{dy}{dt} = (A + B)\,y$, whose exact solution is
$$y(t) = e^{t(A+B)}\,y(0).$$
The equation can be split by applying the operator exponentials in sequence, $y(t) \approx e^{tA}\,e^{tB}\,y(0)$, but unless $A$ and $B$ commute the solutions will not be exact, only similar. This is an example of first order splitting.
There are ways to reduce this error, which include taking an average of two split equations,
$$y(t) \approx \tfrac{1}{2}\left(e^{tA}e^{tB} + e^{tB}e^{tA}\right)y(0),$$
which cancels the leading error term.
Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the highest used in practice, because higher-order splittings require much more time to calculate and are difficult to implement, so the extra accuracy is rarely worth the cost.
Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Calculating such methods efficiently remains a massive challenge for many chemists trying to simulate molecules or chemical environments.
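As an illustration (a minimal one-dimensional sketch in Python, assuming natural units with hbar = m = 1 and an arbitrary harmonic potential; none of the parameters correspond to a particular production code), the split-step Fourier method propagates a wave packet by alternating between position space, where the potential term is diagonal, and momentum space, where the kinetic term is diagonal:

    import numpy as np

    # 1D grid in natural units (hbar = m = 1); illustrative parameters.
    N, L, dt, steps = 512, 40.0, 0.01, 1000
    dx = L / N
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # momentum grid

    V = 0.5 * x**2                                  # assumed harmonic potential
    psi = np.exp(-(x - 2.0) ** 2)                   # displaced Gaussian packet
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

    # Strang (second-order) splitting: half a potential step, a full
    # kinetic step applied in momentum space, then another half step.
    half_V = np.exp(-0.5j * V * dt)
    kinetic = np.exp(-0.5j * k**2 * dt)

    for _ in range(steps):
        psi = half_V * psi
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = half_V * psi

    print(np.sum(np.abs(psi) ** 2) * dx)  # norm stays ~1: evolution is unitary

Because each split factor is applied exactly, the only error is the splitting error itself, which is second order in the time step for this symmetric arrangement.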
Density functional methods
Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.
Semi-empirical methods
Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.
Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence-electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics.
Molecular mechanics
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
The database of compounds used for parameterization, together with the resulting set of parameters and functions (called the force field), is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
Molecular dynamics
Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics, or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a previous time point, determines the next phase point in time by integrating Newton's laws of motion.
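A minimal sketch of such an integration loop (velocity Verlet, a commonly used integrator; the harmonic force law and unit mass below are illustrative assumptions rather than any particular force field):

    import numpy as np

    def velocity_verlet(x, v, force, dt, steps, m=1.0):
        """Integrate Newton's equations of motion with velocity Verlet.
        x, v are arrays of positions and velocities; force is a callable
        returning the force at the given positions."""
        f = force(x)
        trajectory = [x.copy()]
        for _ in range(steps):
            x = x + v * dt + 0.5 * (f / m) * dt**2   # position update
            f_new = force(x)
            v = v + 0.5 * (f + f_new) / m * dt       # velocity update
            f = f_new
            trajectory.append(x.copy())
        return np.array(trajectory)

    # Example: one particle on a harmonic spring (illustrative force law).
    traj = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           force=lambda x: -x, dt=0.01, steps=1000)
    print(traj[-1])  # the particle oscillates; energy is well conserved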
Monte Carlo
Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method that makes use of so-called importance sampling. Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
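The acceptance rule at the heart of importance sampling can be sketched as follows (a minimal Metropolis scheme in Python for a single coordinate; the energy function, step size, and temperature are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis(energy, x0, n_samples, step=0.5, beta=1.0):
        """Metropolis Monte Carlo: propose random moves and accept them
        with probability min(1, exp(-beta * dE)), so that low-energy
        states are sampled preferentially (importance sampling)."""
        x, e = x0, energy(x0)
        samples = []
        for _ in range(n_samples):
            x_new = x + rng.uniform(-step, step)
            e_new = energy(x_new)
            if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
                x, e = x_new, e_new      # accept the move
            samples.append(x)            # a rejected move repeats x
        return np.array(samples)

    samples = metropolis(lambda x: 0.5 * x**2, x0=0.0, n_samples=50_000)
    print(samples.mean(), samples.var())  # ~0 and ~1 for this potential

Averages over such samples converge to thermodynamic expectation values for the chosen temperature.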
Quantum mechanics/molecular mechanics (QM/MM)
QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.
Quantum Computational Chemistry
Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators.
Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.
Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.
Computational costs in chemistry algorithms
The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
Algorithmic complexity examples
The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.
Molecular dynamics
Algorithm
Solves Newton's equations of motion for atoms and molecules.
Complexity
The standard pairwise interaction calculation in MD leads to an $O(N^2)$ complexity for $N$ particles. This is because each particle interacts with every other particle, resulting in $\frac{N(N-1)}{2}$ interactions. Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to $O(N \log N)$ or even $O(N)$ by grouping distant particles and treating them as a single entity or using clever mathematical approximations.
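The difference can be made concrete with a short sketch (illustrative Python using SciPy's cKDTree; the particle count and cutoff radius are arbitrary choices): a naive double loop touches every pair, while a spatial data structure only visits nearby particles.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    points = rng.random((10_000, 3))   # N particles in a unit box

    # Naive approach: N * (N - 1) / 2 pair evaluations, i.e. O(N^2).
    n_pairs_naive = len(points) * (len(points) - 1) // 2

    # Tree-based approach: only pairs within a cutoff radius are visited,
    # which is how neighbor-list methods avoid the full O(N^2) cost.
    tree = cKDTree(points)
    close_pairs = tree.query_pairs(r=0.05)

    print(n_pairs_naive, len(close_pairs))  # ~5e7 versus far fewer pairs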
Quantum mechanics/molecular mechanics (QM/MM)
Algorithm
Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.
Complexity
The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as $O(M^3)$ to $O(M^4)$, where $M$ is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
Hartree-Fock method
Algorithm
Finds a single Fock state that minimizes the energy.
Complexity
NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as $O(N^3)$ to $O(N^4)$ depending on implementation, with $N$ being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals. This proof of NP-hardness or NP-completeness comes from embedding problems like the Ising model into the Hartree-Fock formalism.
Density functional theory
Algorithm
Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.
Complexity
Traditional implementations of DFT typically scale as $O(N^3)$, mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
Standard CCSD and CCSD(T) method
Algorithm
CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Complexity
CCSD
Scales as $O(N^6)$, where $N$ is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.
CCSD(T)
With the addition of perturbative triples, the complexity increases to $O(N^7)$. This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.
Linear-scaling CCSD(T) method
Algorithm
An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Complexity
Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behavior across various systems and implementations.
Accuracy
Computational chemistry is not an exact description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules interacting with heavy atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).
There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).
Software packages
Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:
Biomolecular modelling programs: proteins, nucleic acid.
Molecular mechanics programs.
Quantum chemistry and solid state-physics software supporting several methods.
Molecular design software
Semi-empirical programs.
Valence bond programs.
Specialized journals on computational chemistry
Annual Reports in Computational Chemistry
Computational and Theoretical Chemistry
Computational and Theoretical Polymer Science
Computers & Chemical Engineering
Journal of Chemical Information and Modeling
Journal of Chemical Software
Journal of Chemical Theory and Computation
Journal of Cheminformatics
Journal of Computational Chemistry
Journal of Computer Aided Chemistry
Journal of Computer Chemistry Japan
Journal of Computer-aided Molecular Design
Journal of Theoretical and Computational Chemistry
Molecular Informatics
Theoretical Chemistry Accounts
External links
NIST Computational Chemistry Comparison and Benchmark DataBase – Contains a database of thousands of computational and experimental results for hundreds of systems
American Chemical Society Division of Computers in Chemistry – American Chemical Society Computers in Chemistry Division, resources for grants, awards, contacts and meetings.
CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives – CSTB Report
3.320 Atomistic Computer Modeling of Materials (SMA 5107) Free MIT Course
Chem 4021/8021 Computational Chemistry Free University of Minnesota Course
Technology Roadmap for Computational Chemistry
Applications of molecular and materials modelling.
Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology CSTB Report
MD and Computational Chemistry applications on GPUs
Susi Lehtola, Antti J. Karttunen:"Free and open source software for computational chemistry education", First published: 23 March 2022, https://doi.org/10.1002/wcms.1610 (Open Access)
CCL.NET: Computational Chemistry List, Ltd.
See also
References
Computational fields of study
Theoretical chemistry
Physical chemistry
Chemical physics
Computational physics | Computational chemistry | [
"Physics",
"Chemistry",
"Technology"
] | 5,910 | [
"Computational fields of study",
"Applied and interdisciplinary physics",
"Computational physics",
"Theoretical chemistry",
"Computational chemistry",
"Computing and society",
"nan",
"Physical chemistry",
"Chemical physics"
] |
6,026 | https://en.wikipedia.org/wiki/Countable%20set | In mathematics, a set is countable if either it is finite or it can be put in one-to-one correspondence with the set of natural numbers. Equivalently, a set is countable if there exists an injective function from it into the natural numbers; this means that each element in the set may be associated with a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements.
In more technical terms, assuming the axiom of countable choice, a set is countable if its cardinality (the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to be countably infinite.
The concept is attributed to Georg Cantor, who proved the existence of uncountable sets, that is, sets that are not countable; for example the set of the real numbers.
A note on terminology
Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal. An alternative style uses countable to mean what is here called countably infinite, and at most countable to mean what is here called countable.
The terms enumerable and denumerable may also be used, e.g. referring to countable and countably infinite respectively; definitions vary, and care is needed respecting the difference from recursively enumerable.
Definition
A set S is countable if:
Its cardinality |S| is less than or equal to ℵ₀ (aleph-null), the cardinality of the set of natural numbers ℕ.
There exists an injective function from S to ℕ.
S is empty or there exists a surjective function from ℕ to S.
There exists a bijective mapping between S and a subset of ℕ.
S is either finite (|S| < ℵ₀) or countably infinite.
All of these definitions are equivalent.
A set S is countably infinite if:
Its cardinality |S| is exactly ℵ₀.
There is an injective and surjective (and therefore bijective) mapping between S and ℕ.
S has a one-to-one correspondence with ℕ.
The elements of S can be arranged in an infinite sequence a₀, a₁, a₂, ..., where aᵢ is distinct from aⱼ for i ≠ j and every element of S is listed.
A set is uncountable if it is not countable, i.e. its cardinality is greater than ℵ₀.
History
In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable, thus showing that not all infinite sets are countable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.
Introduction
A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted {3, 4, 5}, called roster form. This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example, {1, 2, 3, ..., 100} presumably denotes the set of integers from 1 to 100. Even in this case, however, it is still possible to list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up to n, this gives us the usual definition of "sets of size n".
Some sets are infinite; these sets have more than n elements where n is any integer that can be specified. (No matter how large the specified integer n is, infinite sets have more than n elements.) For example, the set of natural numbers, denotable by {0, 1, 2, 3, ...}, has infinitely many elements, and we cannot use any natural number to give its size. It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:
..., −2 ↔ −4, −1 ↔ −2, 0 ↔ 0, 1 ↔ 2, 2 ↔ 4, ...; or, more generally, n ↔ 2n (see picture). What we have done here is arrange the integers and the even integers into a one-to-one correspondence (or bijection), which is a function that maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integers countably infinite and say they have cardinality ℵ₀.
Georg Cantor showed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable.
Formal overview
By definition, a set S is countable if there exists a bijection between S and a subset of the natural numbers ℕ = {0, 1, 2, 3, ...}. For example, define the correspondence a ↔ 1, b ↔ 2, c ↔ 3.
Since every element of S = {a, b, c} is paired with precisely one element of {1, 2, 3}, and vice versa, this defines a bijection, and shows that S is countable. Similarly we can show all finite sets are countable.
As for the case of infinite sets, a set S is countably infinite if there is a bijection between S and all of ℕ. As examples, consider the sets A = {1, 2, 3, ...}, the set of positive integers, and B = {0, 2, 4, 6, ...}, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignments n ↔ n + 1 and n ↔ 2n respectively, so that
0 ↔ 1, 1 ↔ 2, 2 ↔ 3, 3 ↔ 4, ... and 0 ↔ 0, 1 ↔ 2, 2 ↔ 4, 3 ↔ 6, ...
Every countably infinite set is countable, and every infinite countable set is countably infinite. Furthermore, any subset of the natural numbers is countable, and more generally:
The set of all ordered pairs of natural numbers (the Cartesian product of two sets of natural numbers, ℕ × ℕ) is countably infinite, as can be seen by following a path like the one in the picture: The resulting mapping proceeds as follows:
0 ↔ (0, 0), 1 ↔ (1, 0), 2 ↔ (0, 1), 3 ↔ (2, 0), 4 ↔ (1, 1), 5 ↔ (0, 2), 6 ↔ (3, 0), ...
This mapping covers all such ordered pairs.
This form of triangular mapping recursively generalizes to n-tuples of natural numbers, i.e., (a₁, a₂, a₃, ..., aₙ) where aᵢ and n are natural numbers, by repeatedly mapping the first two elements of an n-tuple to a natural number. For example, (0, 2, 3) can be written as ((0, 2), 3). Then (0, 2) maps to 5, so ((0, 2), 3) maps to (5, 3); then (5, 3) maps to 39. Since a different 2-tuple, that is a pair such as (a, b), maps to a different natural number, a difference between two n-tuples by a single element is enough to ensure that the n-tuples are mapped to different natural numbers. So, an injection from the set of n-tuples to the set of natural numbers ℕ is proved. For the set of n-tuples made by the Cartesian product of finitely many different sets, each element in each tuple has a correspondence to a natural number, so every tuple can be written as an n-tuple of natural numbers, and then the same logic is applied to prove the theorem.
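The triangular mapping just described is easy to make executable. A minimal sketch using the standard Cantor pairing formula π(a, b) = (a + b)(a + b + 1)/2 + b, which enumerates ℕ × ℕ along the diagonals shown above and reproduces the values 5 and 39 from the example:

```python
def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection from N x N to N along diagonals."""
    return (a + b) * (a + b + 1) // 2 + b

def encode(t):
    """Map an n-tuple to N by repeatedly pairing its first two entries."""
    while len(t) > 1:
        t = (pair(t[0], t[1]),) + tuple(t[2:])
    return t[0]

print(pair(0, 2))         # 5
print(encode((0, 2, 3)))  # pair(5, 3) = 39
```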
The set of all integers ℤ and the set of all rational numbers ℚ may intuitively seem much bigger than ℕ. But looks can be deceiving. If a pair is treated as the numerator and denominator of a vulgar fraction (a fraction in the form a/b where a and b are integers and b ≠ 0), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural number n is also a fraction n/1. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below.
In a similar manner, the set of algebraic numbers is countable.
Sometimes more than one mapping is useful: a set A to be shown as countable is one-to-one mapped (injection) to another set B; then A is proved as countable if B is one-to-one mapped to the set of natural numbers. For example, the set of positive rational numbers can easily be one-to-one mapped to the set of natural number pairs (2-tuples) because p/q maps to (p, q). Since the set of natural number pairs is one-to-one mapped (actually one-to-one correspondence or bijection) to the set of natural numbers as shown above, the positive rational number set is proved as countable.
With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no"; we can extend it, but we need to assume a new axiom to do so.
For example, given countable sets a, b, c, ..., we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:
We need the axiom of countable choice to index all the sets simultaneously.
This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (finite Cartesian product). So we are talking about a countable union of countable sets, which is countable by the previous theorem.
The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets.
These follow from the definitions of countable set as injective / surjective functions.
Cantor's theorem asserts that if A is a set and P(A) is its power set, i.e. the set of all subsets of A, then there is no surjective function from A to P(A). A proof is given in the article Cantor's theorem. As an immediate consequence of this and the Basic Theorem above we have:
For an elaboration of this result see Cantor's diagonal argument.
The set of real numbers is uncountable, and so is the set of all infinite sequences of natural numbers.
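The diagonal argument behind these results can be illustrated on finite prefixes: given any list of binary sequences, flipping the i-th digit of the i-th sequence produces a sequence that differs from every listed one. A minimal sketch; the listed rows are arbitrary examples:

```python
def diagonal_escape(rows):
    """Return a binary sequence differing from rows[i] at position i,
    so it cannot equal any row: Cantor's diagonal argument in miniature."""
    return [1 - rows[i][i] for i in range(len(rows))]

listed = [[0, 1, 0, 1],
          [1, 1, 1, 1],
          [0, 0, 0, 0],
          [1, 0, 1, 0]]
print(diagonal_escape(listed))  # [1, 0, 1, 1] differs from every row
```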
Minimal model of set theory is countable
If there is a set that is a standard model (see inner model) of ZFC set theory, then there is a minimal standard model (see Constructible universe). The Löwenheim–Skolem theorem can be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this model M contains elements that are:
subsets of M, hence countable,
but uncountable from the point of view of M,
was seen as paradoxical in the early days of set theory; see Skolem's paradox for more.
The minimal standard model includes all the algebraic numbers and all effectively computable transcendental numbers, as well as many other kinds of numbers.
Total orders
Countable sets can be totally ordered in various ways, for example:
Well-orders (see also ordinal number):
The usual order of natural numbers (0, 1, 2, 3, 4, 5, ...)
The integers in the order (0, 1, 2, 3, ...; −1, −2, −3, ...)
Other (not well orders):
The usual order of integers (..., −3, −2, −1, 0, 1, 2, 3, ...)
The usual order of rational numbers (Cannot be explicitly written as an ordered list!)
In both examples of well orders here, any subset has a least element; and in both examples of non-well orders, some subsets do not have a least element.
This is the key definition that determines whether a total order is also a well order.
See also
Aleph number
Counting
Hilbert's paradox of the Grand Hotel
Uncountable set
Notes
Citations
References
Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011. (Paperback edition).
Basic concepts in infinite set theory
Cardinal numbers
Infinity | Countable set | [
"Mathematics"
] | 2,629 | [
"Cardinal numbers",
"Basic concepts in infinite set theory",
"Mathematical objects",
"Infinity",
"Basic concepts in set theory",
"Numbers"
] |
6,034 | https://en.wikipedia.org/wiki/Cahn%E2%80%93Ingold%E2%80%93Prelog%20priority%20rules | In organic chemistry, the Cahn–Ingold–Prelog (CIP) sequence rules (also the CIP priority convention; named after Robert Sidney Cahn, Christopher Kelk Ingold, and Vladimir Prelog) are a standard process to completely and unequivocally name a stereoisomer of a molecule. The purpose of the CIP system is to assign an R or S descriptor to each stereocenter and an E or Z descriptor to each double bond so that the configuration of the entire molecule can be specified uniquely by including the descriptors in its systematic name. A molecule may contain any number of stereocenters and any number of double bonds, and each usually gives rise to two possible isomers. A molecule with n stereocenters will usually have 2ⁿ stereoisomers, comprising 2ⁿ⁻¹ diastereomers, each having an associated pair of enantiomers. The CIP sequence rules contribute to the precise naming of every stereoisomer of every organic molecule with all atoms of ligancy of fewer than 4 (but including ligancy of 6 as well, this term referring to the "number of neighboring atoms" bonded to a center).
The key article setting out the CIP sequence rules was published in 1966, and was followed by further refinements, before it was incorporated into the rules of the International Union of Pure and Applied Chemistry (IUPAC), the official body that defines organic nomenclature, in 1974. The rules have since been revised, most recently in 2013, as part of the IUPAC book Nomenclature of Organic Chemistry. The IUPAC presentation of the rules constitute the official, formal standard for their use, and it notes that "the method has been developed to cover all compounds with ligancy up to 4... and… [extended to the case of] ligancy 6… [as well as] for all configurations and conformations of such compounds." Nevertheless, though the IUPAC documentation presents a thorough introduction, it includes the caution that "it is essential to study the original papers, especially the 1966 paper, before using the sequence rule for other than fairly simple cases."
A recent paper argues for changes to some of the rules (sequence rules 1b and 2) to address certain molecules for which the correct descriptors were unclear. However, a different problem remains: in rare cases, two different stereoisomers of the same molecule can have the same CIP descriptors, so the CIP system may not be able to unambiguously name a stereoisomer, and other systems may be preferable.
Steps for naming
The steps for naming molecules using the CIP system are often presented as:
Identification of stereocenters and double bonds;
Assignment of priorities to the groups attached to each stereocenter or double-bonded atom; and
Assignment of R/S and E/Z descriptors.
Assignment of priorities
R/S and E/Z descriptors are assigned by using a system for ranking priority of the groups attached to each stereocenter. This procedure, often known as the sequence rules, is the heart of the CIP system. The overview in this section omits some rules that are needed only in rare cases.
Compare the atomic number (Z) of the atoms directly attached to the stereocenter; the group having the atom of higher atomic number Z receives higher priority (i.e. number 1).
If there is a tie, the atoms at distance 2 from the stereocenter have to be considered: a list is made for each group of further atoms bonded to the one directly attached to the stereocenter. Each list is arranged in order of decreasing atomic number Z. Then the lists are compared atom by atom; at the earliest difference, the group containing the atom of higher atomic number Z receives higher priority.
If there is still a tie, each atom in each of the two lists is replaced with a sublist of the other atoms bonded to it (at distance 3 from the stereocenter), the sublists are arranged in decreasing order of atomic number Z, and the entire structure is again compared atom by atom. This process is repeated recursively, each time with atoms one bond farther from the stereocenter, until the tie is broken.
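A drastically simplified sketch of this recursive comparison is given below. The graph encoding, atom labels, and fixed search depth are hypothetical illustrations; a real implementation needs the hierarchical digraph with phantom atoms for multiple bonds and the further rules described in the following sections:

```python
def priority_key(mol, atom, parent=None, depth=3):
    """Build a nested comparison key: atomic number first, then the
    neighbours' keys sorted highest-first, sphere by sphere, without
    doubling back along the bond just followed."""
    z, bonds = mol[atom]
    if depth == 0:
        return (z, [])
    branches = sorted(
        (priority_key(mol, n, atom, depth - 1) for n in bonds if n != parent),
        reverse=True)
    return (z, branches)

# Hypothetical fragment: stereocenter C0 carrying -OH, -CH3 and -H,
# encoded as {atom: (atomic_number, bonded_atoms)}.
mol = {
    "C0": (6, ["O1", "C2", "H3"]),
    "O1": (8, ["C0", "H4"]),
    "C2": (6, ["C0", "H5", "H6", "H7"]),
    "H3": (1, ["C0"]), "H4": (1, ["O1"]),
    "H5": (1, ["C2"]), "H6": (1, ["C2"]), "H7": (1, ["C2"]),
}

ranked = sorted(["O1", "C2", "H3"],
                key=lambda a: priority_key(mol, a, parent="C0"),
                reverse=True)
print(ranked)  # ['O1', 'C2', 'H3']: O > C > H by atomic number
```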
Isotopes
If two groups differ only in isotopes, then the larger atomic mass is used to set the priority.
Double and triple bonds
If an atom, A, is double-bonded to another atom, then atom A should be treated as though it is "connected to the same atom twice". An atom that is double-bonded has a higher priority than an atom that is single-bonded. When dealing with double-bonded priority groups, one is allowed to visit the same atom twice as one creates an arc.
When B is replaced with a list of attached atoms, A itself, but not its "phantom", is excluded in accordance with the general principle of not doubling back along a bond that has just been followed. A triple bond is handled the same way except that A and B are each connected to two phantom atoms of the other.
Geometrical isomers
If two substituents on an atom are geometric isomers of each other, the Z-isomer has higher priority than the E-isomer. A stereoisomer that contains two higher priority groups on the same face of the double bond (cis) is classified as "Z." The stereoisomer with two higher priority groups on opposite sides of a carbon-carbon double bond (trans) is classified as "E."
Cyclic molecules
To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a phantom atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as phantoms, some not) in the tree.
Assigning descriptors
Stereocenters: R/S
A chiral sp³-hybridized stereocenter bears four different substituents. All four substituents are assigned priorities based on their atomic numbers. After the substituents of a stereocenter have been assigned their priorities, the molecule is oriented in space so that the group with the lowest priority is pointed away from the observer. If the substituents are numbered from 1 (highest priority) to 4 (lowest priority), then the sense of rotation of a curve passing through 1, 2 and 3 distinguishes the stereoisomers. In a configurational drawing, the lowest-priority group (most often hydrogen) is positioned behind the plane, on the hatched bond going away from the reader. An arc is drawn from the highest-priority group through the second-priority group, finishing at the group of third priority. An arc drawn clockwise gives the rectus (R) assignment; an arc drawn counterclockwise gives the sinister (S) assignment. The names are derived from the Latin for 'right' and 'left', respectively. When naming an organic isomer, the abbreviation for either the rectus or sinister assignment is placed in front of the name in parentheses. For example, 3-methyl-1-pentene with a rectus assignment is formatted as (R)-3-methyl-1-pentene.
A practical method of determining whether an enantiomer is R or S is by using the right-hand rule: one wraps the molecule with the fingers in the direction 1 → 2 → 3. If the thumb points in the direction of the fourth substituent, the enantiomer is R; otherwise, it is S.
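The right-hand rule has a computational analogue: with the substituent positions taken in CIP priority order, the sign of a triple product of vectors measured from the lowest-priority substituent decides the descriptor. A minimal sketch; the coordinates, the helper name, and the sign convention (checked against the idealized arrangement below, where 1 → 2 → 3 runs clockwise with substituent 4 pointing away) are all illustrative assumptions:

```python
import numpy as np

def rs_descriptor(p1, p2, p3, p4):
    """R/S from 3-D substituent positions given in CIP priority order
    (p1 highest ... p4 lowest), relative to a stereocenter at the origin.
    With vectors taken from p4, a negative triple product corresponds to
    a clockwise 1->2->3 arc viewed with p4 pointing away, i.e. R."""
    v1, v2, v3 = (np.asarray(p) - np.asarray(p4) for p in (p1, p2, p3))
    return "R" if np.dot(v1, np.cross(v2, v3)) < 0 else "S"

# Idealized tetrahedral arrangement, substituent 4 pointing "down" (-z):
h = 0.333
p1, p2, p3 = (0.0, 1.0, h), (0.866, -0.5, h), (-0.866, -0.5, h)
p4 = (0.0, 0.0, -1.0)
print(rs_descriptor(p1, p2, p3, p4))  # R: 1->2->3 is clockwise from above
```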
It is possible in rare cases that two substituents on an atom differ only in their absolute configuration (R or S). If the relative priorities of these substituents need to be established, R takes priority over S. When this happens, the descriptor of the stereocenter is a lowercase letter (r or s) instead of the uppercase letter normally used.
Double bonds: E/Z
For double-bonded molecules, the Cahn–Ingold–Prelog priority rules (CIP rules) are followed to determine the priority of the substituents of the double bond. If both of the high-priority groups are on the same side of the double bond (cis configuration), then the stereoisomer is assigned the configuration Z (zusammen, German word meaning "together"). If the high-priority groups are on opposite sides of the double bond (trans configuration), then the stereoisomer is assigned the configuration E (entgegen, German word meaning "opposed").
Coordination compounds
In some cases where stereogenic centers are formed, the configuration must be specified. Without the presence of a non-covalent interaction, a compound is achiral. Some professionals have proposed a new rule to account for this. This rule states that "non-covalent interactions have a fictitious number between 0 and 1" when assigning priority. Compounds in which this occurs are referred to as coordination compounds.
Spiro compounds
Some spiro compounds, for example the SDP ligands ((R)- and (S)-7,7'-bis(diphenylphosphaneyl)-2,2',3,3'-tetrahydro-1,1'-spirobi[indene]), represent chiral, C2-symmetrical molecules where the rings lie approximately at right angles to each other and each molecule cannot be superposed on its mirror image. The spiro carbon, C, is a stereogenic centre, and priority can be assigned a>a′>b>b′, in which one ring (both give the same answer) contains atoms a and b adjacent to the spiro carbon, and the other contains a′ and b′. The configuration at C may then be assigned as for any other stereocentre.
Examples
The following are examples of application of the nomenclature.
{| align="center" class="wikitable" width=800px
|-
!colspan="2"|R/S assignments for several compounds
|-
|bgcolor="#FFFFFF"|
|bgcolor="#FFFFFF" valign=top| The hypothetical molecule bromochlorofluoroiodomethane shown in its (R)-configuration would be a very simple chiral compound. The priorities are assigned based on atomic number (Z): iodine (Z = 53) > bromine (Z = 35) > chlorine (Z = 17) > fluorine (Z = 9). Allowing fluorine (lowest priority, number 4) to point away from the viewer the rotation is clockwise hence the R assignment.
|-
|bgcolor="#FFFFFF"|
| bgcolor="#FFFFFF" valign="top" | In the assignment of L-serine highest priority (i.e. number 1) is given to the nitrogen atom (Z = 7) in the amino group (NH2). Both the hydroxymethyl group (CH2OH) and the carboxylic acid group (COOH) have carbon atoms (Z = 6) but priority is given to the latter because the carbon atom in the COOH group is connected to a second oxygen (Z = 8) whereas in the CH2OH group carbon is connected to a hydrogen atom (Z = 1). Lowest priority (i.e. number 4) is given to the hydrogen atom and as this atom points away from the viewer, the counterclockwise decrease in priority over the three remaining substituents completes the assignment as S.
|-
|bgcolor="#FFFFFF"|
|bgcolor="#FFFFFF" valign=top| The stereocenter in (S)-carvone is connected to one hydrogen atom (not shown, priority 4) and three carbon atoms. The isopropenyl group has priority 1 (carbon atoms only), and for the two remaining carbon atoms, priority is decided with the carbon atoms two bonds removed from the stereocenter, one part of the keto group (O, O, C, priority number 2) and one part of an alkene (C, C, H, priority number 3). The resulting counterclockwise rotation results in S.
|}
Describing multiple centers
If a compound has more than one chiral stereocenter, each center is denoted by either R or S. For example, ephedrine exists in (1R,2S) and (1S,2R) stereoisomers, which are distinct mirror-image forms of each other, making them enantiomers. This compound also exists as the two enantiomers written (1R,2R) and (1S,2S), which are named pseudoephedrine rather than ephedrine. All four of these isomers are named 2-methylamino-1-phenyl-1-propanol in systematic nomenclature. However, ephedrine and pseudoephedrine are diastereomers, or stereoisomers that are not enantiomers because they are not related as mirror-image copies. Pseudoephedrine and ephedrine are given different names because, as diastereomers, they have different chemical properties, even for racemic mixtures of each.
More generally, for any pair of enantiomers, all of the descriptors are opposite: (R,R) and (S,S) are enantiomers, as are (R,S) and (S,R). Diastereomers have at least one descriptor in common; for example (R,S) and (R,R) are diastereomers, as are (S,R) and (S,S). This holds true also for compounds having more than two stereocenters: if two stereoisomers have at least one descriptor in common, they are diastereomers. If all the descriptors are opposite, they are enantiomers.
A meso compound is an achiral molecule, despite having two or more stereogenic centers. A meso compound is superposable on its mirror image, therefore it reduces the number of stereoisomers predicted by the 2ⁿ rule. This occurs because the molecule possesses an internal plane of symmetry, which can be seen by rotating around the central carbon–carbon bond. One example is meso-tartaric acid, in which (R,S) is the same as the (S,R) form. In meso compounds the R and S stereocenters occur in symmetrically positioned pairs.
Relative configuration
The relative configuration of two stereoisomers may be denoted by the descriptors R and S with an asterisk (*). (R*,R*) means two centers having identical configurations, (R,R) or (S,S); (R*,S*) means two centers having opposite configurations, (R,S) or (S,R). To begin, the lowest-numbered (according to IUPAC systematic numbering) stereogenic center is given the R* descriptor.
To designate two anomers the relative stereodescriptors alpha (α) and beta (β) are used. In the α anomer the anomeric carbon atom and the reference atom do have opposite configurations (R,S) or (S,R), whereas in the β anomer they are the same (R,R) or (S,S).
Faces
Stereochemistry also plays a role in assigning faces to trigonal molecules such as ketones. A nucleophile in a nucleophilic addition can approach the carbonyl group from two opposite sides or faces. When an achiral nucleophile attacks acetone, both faces are identical and there is only one reaction product. When the nucleophile attacks butanone, the faces are not identical (enantiotopic) and a racemic product results. When the nucleophile is a chiral molecule, diastereoisomers are formed. When one face of a molecule is shielded by substituents or geometric constraints compared to the other face, the faces are called diastereotopic. The same rules that determine the stereochemistry of a stereocenter (R or S) also apply when assigning the face of a molecular group. The faces are then called the Re-face and Si-face. In the example displayed on the right, the compound acetophenone is viewed from the Re-face. Hydride addition as in a reduction process from this side will form the (S)-enantiomer and attack from the opposite Si-face will give the (R)-enantiomer. However, one should note that adding a chemical group to the prochiral center from the Re-face will not always lead to an (S)-stereocenter, as the priority of the chemical group has to be taken into account. That is, the absolute stereochemistry of the product is determined on its own and not by considering which face it was attacked from. In the above-mentioned example, if chloride (Z = 17) were added to the prochiral center from the Re-face, this would result in an (R)-enantiomer.
See also
Chirality (chemistry)
Descriptor (chemistry)
E–Z notation
Isomer
Stereochemistry
References
Chemical nomenclature
Eponymous chemical rules
Stereochemistry | Cahn–Ingold–Prelog priority rules | [
"Physics",
"Chemistry"
] | 3,679 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
6,038 | https://en.wikipedia.org/wiki/Chemical%20engineering | Chemical engineering is an engineering field which deals with the study of the operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.
Etymology
A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead, it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in United States: An Encyclopedia puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, "chemical engineer," was already in common use in Britain and the United States.
History
New concepts and innovations
In the 1940s, it became clear that unit operations alone were insufficient in developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics".
Safety and hazard developments
Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.
Recent progress
Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.
Concepts
Chemical engineering involves the application of several principles. Key concepts are presented below.
Plant design and construction
Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.
Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires additional training and job skills, or act as a consultant to the project group. In the USA the education of chemical engineering graduates from Baccalaureate programs accredited by ABET does not usually stress project engineering, which can be obtained by specialized training, as electives, or from graduate programs. Project engineering jobs are some of the largest employers for chemical engineers.
Process design and analysis
A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. On the other hand, a unit process is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.
Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.
Education for chemical engineers in the first college degree, 3 or 4 years of study, stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate the efficiency and make recommendations for improvements.
Transport phenomena
Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.
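As a small worked example of the kind of macroscopic transport model meant here, the sketch below integrates one-dimensional transient heat conduction, dT/dt = α d²T/dx², with an explicit finite-difference scheme. All parameter values are hypothetical:

```python
import numpy as np

alpha, L, nx = 1e-4, 1.0, 51   # thermal diffusivity (m^2/s), rod length (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha       # respects the stability limit dt <= dx^2 / (2*alpha)

T = np.zeros(nx)
T[0], T[-1] = 100.0, 0.0       # fixed-temperature boundary conditions

for _ in range(5000):          # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T[::10])                 # tends toward the linear steady-state profile
```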
Applications and practice
Chemical engineers develop economic ways of using materials and energy. Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics on a large-scale, industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers.
Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles.
See also
Related topics
Education for Chemical Engineers
English Engineering units
List of chemical engineering societies
List of chemical engineers
List of chemical process simulators
Outline of chemical engineering
Related fields and concepts
Biochemical engineering
Bioinformatics
Biological engineering
Biomedical engineering
Biomolecular engineering
Bioprocess engineering
Biotechnology
Biotechnology engineering
Catalysts
Ceramics
Chemical process modeling
Chemical reactor
Chemical technologist
Chemical weapons
Cheminformatics
Computational fluid dynamics
Corrosion engineering
Cost estimation
Earthquake engineering
Electrochemistry
Electrochemical engineering
Environmental engineering
Fischer Tropsch synthesis
Fluid dynamics
Food engineering
Fuel cell
Gasification
Heat transfer
Industrial catalysts
Industrial chemistry
Industrial gas
Mass transfer
Materials science
Metallurgy
Microfluidics
Mineral processing
Molecular engineering
Nanotechnology
Natural environment
Natural gas processing
Nuclear reprocessing
Oil exploration
Oil refinery
Paper engineering
Petroleum engineering
Pharmaceutical engineering
Plastics engineering
Polymers
Process control
Process design
Process development
Process engineering
Process miniaturization
Process safety
Semiconductor device fabrication
Separation processes (see also: separation of mixture)
Crystallization processes
Distillation processes
Membrane processes
Syngas production
Textile engineering
Thermodynamics
Transport phenomena
Unit operations
Water technology
Associations
American Institute of Chemical Engineers
Chemical Institute of Canada
European Federation of Chemical Engineering
Indian Institute of Chemical Engineers
Institution of Chemical Engineers
National Organization for the Professional Advancement of Black Chemists and Chemical Engineers
References
Bibliography
Process engineering
Engineering disciplines | Chemical engineering | [
"Chemistry",
"Engineering"
] | 1,813 | [
"Process engineering",
"Chemical engineering",
"Mechanical engineering by discipline",
"nan"
] |
6,042 | https://en.wikipedia.org/wiki/Compact%20space | In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers is not compact either, because it excludes the two limiting values +∞ and −∞. However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.
One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval , some of those points will get arbitrarily close to some real number in that space.
For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0 (while others accumulate to 1).
Since neither 0 nor 1 are members of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering ℝ (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number.
Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that "cover" the space, in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally – that is, in a neighborhood of each point – into corresponding statements that hold throughout the space, and many theorems are of this character.
The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.
Historical development
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point.
Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected.
The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points.
The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.
The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt.
For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence – or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space.
It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis.
In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it.
The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by , who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
Basic examples
Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0, 1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, ... get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0, ∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary – without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point within the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.
Definitions
Various definitions of compactness may apply, depending on the level of generality.
A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness – originally called bicompactness – is defined using covers consisting of open sets (see Open cover definition below).
That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally – in a neighbourhood of each point of the space – and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.
Open cover definition
Formally, a topological space X is called compact if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that X = ⋃{ S : S ∈ C },
there is a finite subcollection F ⊆ C such that X = ⋃{ S : S ∈ F }.
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
Compactness of subsets
A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that K ⊆ ⋃{ S : S ∈ C },
there is a finite subcollection F ⊆ C such that K ⊆ ⋃{ S : S ∈ F }.
Because compactness is a topological property, the compactness of a subset depends only on the subspace topology induced on it. It follows that, if K ⊆ Z ⊆ Y, with subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.
Characterization
If X is a topological space then the following are equivalent:
X is compact; i.e., every open cover of X has a finite subcover.
X has a sub-base such that every cover of the space, by members of the sub-base, has a finite subcover (Alexander's sub-base theorem).
X is Lindelöf and countably compact.
Any collection of closed subsets of X with the finite intersection property has nonempty intersection.
Every net on X has a convergent subnet (see the article on nets for a proof).
Every filter on X has a convergent refinement.
Every net on X has a cluster point.
Every filter on X has a cluster point.
Every ultrafilter on X converges to at least one point.
Every infinite subset of X has a complete accumulation point.
For every topological space Y, the projection X × Y → Y is a closed mapping (see proper map).
Every open cover linearly ordered by subset inclusion contains X.
Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).
Euclidean space
For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.
As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball.
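For a closed interval the finite subcover can even be exhibited constructively: walk from left to right, always choosing a cover element that reaches furthest. A minimal sketch; the function name and example cover are hypothetical, and the intervals are open pairs (l, r):

```python
def finite_subcover(a, b, cover):
    """Greedily extract a finite subcover of the closed interval [a, b]
    from a list of open intervals; raises if the list is not a cover."""
    chosen, x = [], a
    while True:
        # Among intervals containing the current point, reach furthest right.
        best = max((iv for iv in cover if iv[0] < x < iv[1]),
                   key=lambda iv: iv[1], default=None)
        if best is None:
            raise ValueError("the given intervals do not cover [a, b]")
        chosen.append(best)
        if best[1] > b:          # b itself is now covered
            return chosen
        x = best[1]              # continue from the first uncovered point

print(finite_subcover(0.0, 1.0,
      [(-0.1, 0.3), (0.2, 0.6), (0.5, 0.9), (0.8, 1.1), (0.4, 0.7)]))
# [(-0.1, 0.3), (0.2, 0.6), (0.5, 0.9), (0.8, 1.1)]
```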
Metric spaces
For any metric space (X, d), the following are equivalent (assuming countable choice):
(X, d) is compact.
(X, d) is complete and totally bounded (this is also equivalent to compactness for uniform spaces).
(X, d) is sequentially compact; that is, every sequence in X has a convergent subsequence whose limit is in X (this is also equivalent to compactness for first-countable uniform spaces).
(X, d) is limit point compact (also called weakly countably compact); that is, every infinite subset of X has at least one limit point in X.
(X, d) is countably compact; that is, every countable open cover of X has a finite subcover.
(X, d) is an image of a continuous function from the Cantor set.
Every decreasing nested sequence of nonempty closed subsets S₁ ⊇ S₂ ⊇ ... in (X, d) has a nonempty intersection.
Every increasing nested sequence of proper open subsets U₁ ⊆ U₂ ⊆ ... in (X, d) fails to cover X.
A compact metric space (X, d) also satisfies the following properties:
Lebesgue's number lemma: For every open cover of X, there exists a number δ > 0 such that every subset of X of diameter < δ is contained in some member of the cover.
(X, d) is second-countable, separable and Lindelöf – these three conditions are equivalent for metric spaces. The converse is not true; e.g., a countable discrete space satisfies these three conditions, but is not compact.
X is closed and bounded (as a subset of any metric space whose restricted metric is d). The converse may fail for a non-Euclidean space; e.g. the real line equipped with the discrete metric is closed and bounded but not compact, as the collection of all singletons of the space is an open cover which admits no finite subcover. It is complete but not totally bounded.
Ordered spaces
For an ordered space $(X, <)$ (i.e. a totally ordered set equipped with the order topology), the following are equivalent:
$(X, <)$ is compact.
Every subset of $X$ has a supremum (i.e. a least upper bound) in $X$.
Every subset of $X$ has an infimum (i.e. a greatest lower bound) in $X$.
Every nonempty closed subset of $X$ has a maximum and a minimum element.
An ordered space $X$ satisfying (any one of) these conditions is called a complete lattice.
In addition, the following are equivalent for all ordered spaces $(X, <)$, and (assuming countable choice) are true whenever $(X, <)$ is compact. (The converse in general fails if $(X, <)$ is not also metrizable.):
Every sequence in $(X, <)$ has a subsequence that converges in $(X, <)$.
Every monotone increasing sequence in $X$ converges to a unique limit in $X$.
Every monotone decreasing sequence in $X$ converges to a unique limit in $X$.
Every decreasing nested sequence of nonempty closed subsets $S_1 \supseteq S_2 \supseteq \cdots$ in $X$ has a nonempty intersection.
Every increasing nested sequence of proper open subsets $U_1 \subseteq U_2 \subseteq \cdots$ in $X$ fails to cover $X$.
Characterization by continuous functions
Let $X$ be a topological space and $C(X)$ the ring of real continuous functions on $X$.
For each $p \in X$, the evaluation map $\operatorname{ev}_p \colon C(X) \to \mathbb{R}$ given by $\operatorname{ev}_p(f) = f(p)$ is a ring homomorphism.
The kernel of $\operatorname{ev}_p$ is a maximal ideal, since the residue field $C(X) / \ker \operatorname{ev}_p$ is the field of real numbers, by the first isomorphism theorem. A topological space $X$ is pseudocompact if and only if every maximal ideal in $C(X)$ has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.
In general, for non-pseudocompact spaces there are always maximal ideals $\mathfrak{m}$ in $C(X)$ such that the residue field $C(X) / \mathfrak{m}$ is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space $X$ is compact if and only if every point $x$ of the natural extension $X^*$ is infinitely close to a point $x_0$ of $X$ (more precisely, $x$ is contained in the monad of $x_0$).
Hyperreal definition
A space $X$ is compact if its hyperreal extension $X^*$ (constructed, for example, by the ultrapower construction) has the property that every point of $X^*$ is infinitely close to some point of $X$. For example, an open real interval $X = (0, 1)$ is not compact because its hyperreal extension $(0, 1)^*$ contains infinitesimals, which are infinitely close to 0, which is not a point of $X$.
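Conversely (a sketch in the same framework, using the standard-part map; this example is an addition, not from the article): the closed interval $[0, 1]$ passes this criterion, since every hyperreal $x \in [0, 1]^*$ is finite, so
$$x \approx \operatorname{st}(x) \in [0, 1],$$
i.e., $x$ is infinitely close to its standard part, which lies in $[0, 1]$.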
Sufficient conditions
A closed subset of a compact space is compact.
A finite union of compact sets is compact.
A continuous image of a compact space is compact.
The intersection of any non-empty collection of compact subsets of a Hausdorff space is compact (and closed);
If $X$ is not Hausdorff then the intersection of two compact subsets may fail to be compact (see footnote for example).
The product of any collection of compact spaces is compact. (This is Tychonoff's theorem, which is equivalent to the axiom of choice.)
In a metrizable space, a subset is compact if and only if it is sequentially compact (assuming countable choice).
A finite set endowed with any topology is compact.
Properties of compact spaces
A compact subset of a Hausdorff space $X$ is closed.
If $X$ is not Hausdorff then a compact subset of $X$ may fail to be a closed subset of $X$ (see footnote for example).
If $X$ is not Hausdorff then the closure of a compact set may fail to be compact (see footnote for example).
In any topological vector space (TVS), a compact subset is complete. However, every non-Hausdorff TVS contains compact (and thus complete) subsets that are not closed.
If $A$ and $B$ are disjoint compact subsets of a Hausdorff space $X$, then there exist disjoint open sets $U$ and $V$ in $X$ such that $A \subseteq U$ and $B \subseteq V$.
A continuous bijection from a compact space into a Hausdorff space is a homeomorphism.
A compact Hausdorff space is normal and regular.
If a space $X$ is compact and Hausdorff, then no finer topology on $X$ is compact and no coarser topology on $X$ is Hausdorff.
If a subset of a metric space $(X, d)$ is compact then it is $d$-bounded.
Functions and compact spaces
Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum.
(Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.
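In symbols, the extreme value theorem reads (a standard formulation, stated in notation introduced here):
$$f : X \to \mathbb{R} \ \text{continuous},\ X \ \text{compact and nonempty} \ \implies\ \exists\, x_0 \in X \ \text{with} \ f(x_0) = \sup_{x \in X} f(x).$$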
Compactifications
Every topological space $X$ is an open dense subspace of a compact space having at most one point more than $X$, by the Alexandroff one-point compactification.
By the same construction, every locally compact Hausdorff space $X$ is an open dense subspace of a compact Hausdorff space having at most one point more than $X$.
Ordered compact spaces
A nonempty compact subset of the real numbers has a greatest element and a least element.
Let $X$ be a simply ordered set endowed with the order topology.
Then $X$ is compact if and only if $X$ is a complete lattice (i.e. all subsets have suprema and infima).
Examples
Any finite topological space, including the empty set, is compact. More generally, any space with a finite topology (only finitely many open sets) is compact; this includes in particular the trivial topology.
Any space carrying the cofinite topology is compact.
Any locally compact Hausdorff space can be turned into a compact space by adding a single point to it, by means of Alexandroff one-point compactification. The one-point compactification of $\mathbb{R}$ is homeomorphic to the circle $S^1$; the one-point compactification of $\mathbb{R}^2$ is homeomorphic to the sphere $S^2$. Using the one-point compactification, one can also easily construct compact spaces which are not Hausdorff, by starting with a non-Hausdorff space.
The right order topology or left order topology on any bounded totally ordered set is compact. In particular, Sierpiński space is compact.
No discrete space with an infinite number of points is compact. The collection of all singletons of the space is an open cover which admits no finite subcover. Finite discrete spaces are compact.
In $\mathbb{R}$ carrying the lower limit topology, no uncountable set is compact.
In the cocountable topology on an uncountable set, no infinite set is compact. Like the previous example, the space as a whole is not locally compact but is still Lindelöf.
The closed unit interval $[0, 1]$ is compact. This follows from the Heine–Borel theorem. The open interval $(0, 1)$ is not compact: the open cover $\left( \frac{1}{n}, 1 - \frac{1}{n} \right)$ for $n = 3, 4, \ldots$ does not have a finite subcover. Similarly, the set of rational numbers in the closed interval $[0, 1]$ is not compact: the sets of rational numbers in the intervals $\left[ 0, \frac{1}{\pi} - \frac{1}{n} \right]$ and $\left[ \frac{1}{\pi} + \frac{1}{n}, 1 \right]$ cover all the rationals in [0, 1] for $n = 4, 5, \ldots$ but this cover does not have a finite subcover. Here, the sets are open in the subspace topology even though they are not open as subsets of $\mathbb{R}$.
The set $\mathbb{R}$ of all real numbers is not compact as there is a cover of open intervals that does not have a finite subcover. For example, the intervals $(n - 1, n + 1)$, where $n$ takes all integer values in $\mathbb{Z}$, cover $\mathbb{R}$ but there is no finite subcover.
On the other hand, the extended real number line carrying the analogous topology is compact; note that the cover described above would never reach the points at infinity and thus would not cover the extended real line. In fact, the extended real line is homeomorphic to [−1, 1]: map each infinity to the corresponding endpoint ±1, and each real number $x$ to its sign times the unique number $y$ in the positive part of the interval satisfying $y / (1 - y) = |x|$. Since homeomorphisms preserve covers, the Heine–Borel property can be inferred.
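Explicitly, the map just described can be written (the symbol $f$ is chosen here, not taken from the article) as
$$f(x) = \frac{x}{1 + |x|} \quad \text{for } x \in \mathbb{R}, \qquad f(-\infty) = -1, \qquad f(+\infty) = 1,$$
and indeed $y = f(x)$ satisfies $y / (1 - y) = x$ for positive $x$.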
For every natural number $n$, the $n$-sphere $S^n$ is compact. Again from the Heine–Borel theorem, the closed unit ball of any finite-dimensional normed vector space is compact. This is not true for infinite dimensions; in fact, a normed vector space is finite-dimensional if and only if its closed unit ball is compact.
On the other hand, the closed unit ball of the dual of a normed space is compact for the weak-* topology. (Alaoglu's theorem)
The Cantor set is compact. In fact, every compact metric space is a continuous image of the Cantor set.
Consider the set $X$ of all functions $f : \mathbb{R} \to [0, 1]$ from the real number line to the closed unit interval, and define a topology on $X$ so that a sequence $\{f_n\}$ in $X$ converges towards $f \in X$ if and only if $\{f_n(x)\}$ converges towards $f(x)$ for all real numbers $x$. There is only one such topology; it is called the topology of pointwise convergence or the product topology. Then $X$ is a compact topological space; this follows from the Tychonoff theorem.
A subset of the Banach space of real-valued continuous functions on a compact Hausdorff space is relatively compact if and only if it is equicontinuous and pointwise bounded (Arzelà–Ascoli theorem).
Consider the set $K$ of all functions $f : [0, 1] \to [0, 1]$ satisfying the Lipschitz condition $|f(x) - f(y)| \le |x - y|$ for all $x, y \in [0, 1]$. Consider on $K$ the metric induced by the uniform distance $d(f, g) = \sup_{x \in [0, 1]} |f(x) - g(x)|$. Then by the Arzelà–Ascoli theorem the space $K$ is compact.
The spectrum of any bounded linear operator on a Banach space is a nonempty compact subset of the complex numbers $\mathbb{C}$. Conversely, any compact subset of $\mathbb{C}$ arises in this manner, as the spectrum of some bounded linear operator. For instance, a diagonal operator on the Hilbert space $\ell^2$ may have any compact nonempty subset of $\mathbb{C}$ as spectrum.
The space of Borel probability measures on a compact Hausdorff space is compact for the vague topology, by the Alaoglu theorem.
A collection of probability measures on the Borel sets of Euclidean space is called tight if, for any positive epsilon, there exists a compact subset containing all but at most epsilon of the mass of each of the measures. Helly's theorem then asserts that a collection of probability measures is relatively compact for the vague topology if and only if it is tight.
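In symbols, tightness of a collection $\{\mu_\alpha\}$ reads (a standard formulation; the notation is introduced here):
$$\forall \varepsilon > 0 \ \exists K_\varepsilon \subseteq \mathbb{R}^n \ \text{compact such that} \ \mu_\alpha(K_\varepsilon) \ge 1 - \varepsilon \ \text{for all } \alpha.$$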
Algebraic examples
Topological groups such as an orthogonal group are compact, while groups such as a general linear group are not.
Since the $p$-adic integers are homeomorphic to the Cantor set, they form a compact set.
Any global field $K$ is a discrete additive subgroup of its adele ring $\mathbb{A}_K$, and the quotient space $\mathbb{A}_K / K$ is compact. This was used in John Tate's thesis to allow harmonic analysis to be used in number theory.
The spectrum of any commutative ring with the Zariski topology (that is, the set of all prime ideals) is compact, but never Hausdorff (except in trivial cases). In algebraic geometry, such topological spaces are examples of quasi-compact schemes, "quasi" referring to the non-Hausdorff nature of the topology.
The spectrum of a Boolean algebra is compact, a fact which is part of the Stone representation theorem. Stone spaces, compact totally disconnected Hausdorff spaces, form the abstract framework in which these spectra are studied. Such spaces are also useful in the study of profinite groups.
The structure space of a commutative unital Banach algebra is a compact Hausdorff space.
The Hilbert cube is compact, again a consequence of Tychonoff's theorem.
A profinite group (e.g. Galois group) is compact.
See also
Compactly generated space
Compactness theorem
Eberlein compactum
Exhaustion by compact sets
Lindelöf space
Metacompact space
Noetherian topological space
Orthocompact space
Paracompact space
Quasi-compact morphism
Precompact set - also called totally bounded
Relatively compact subspace
Totally bounded
Notes
References
Bibliography
External links
Compactness (mathematics)
General topology
Properties of topological spaces
Topology

List of equations in classical mechanics
https://en.wikipedia.org/wiki/List%20of%20equations%20in%20classical%20mechanics

Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes, called a frame of reference. The point of concurrency of the three axes is known as the origin of the particular space.
Classical mechanics utilises many equations—as well as other mathematical concepts—which relate various physical quantities to one another. These include differential equations, manifolds, Lie groups, and ergodic theory. This article gives a summary of the most important of these.
This article lists equations from Newtonian mechanics; see analytical mechanics for the more general formulation of classical mechanics (which includes Lagrangian and Hamiltonian mechanics).
Classical mechanics
Mass and inertia
Derived kinematic quantities
Derived dynamic quantities
General energy definitions
Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to U (see the defining integral after this list):
Wherever the force is zero, its potential energy is defined to be zero as well.
Whenever the force does work, potential energy is lost.
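Concretely, these two principles amount to the standard defining relation (a well-known identity, not recovered from this article's elided tables):
$$\Delta U = -W = -\int_{\mathbf{r}_1}^{\mathbf{r}_2} \mathbf{F} \cdot d\mathbf{r},$$
with the zero of U placed where the force vanishes.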
Generalized mechanics
Kinematics
In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use θ, but this does not have to be the polar angle used in polar coordinate systems. The unit axial vector
$$\hat{\mathbf{n}} = \hat{\mathbf{e}}_r \times \hat{\mathbf{e}}_\theta$$
defines the axis of rotation, where $\hat{\mathbf{e}}_r$ is the unit vector in the direction of $\mathbf{r}$ and $\hat{\mathbf{e}}_\theta$ is the unit vector tangential to the angle.
Dynamics
Precession
The precession angular speed of a spinning top is given by:
$$\boldsymbol{\Omega} = \frac{w r}{I \omega}$$
where w is the weight of the spinning flywheel, r is the distance from the pivot to the flywheel's center of mass, I is the moment of inertia about the spin axis, and ω is the spin angular speed.
Energy
The mechanical work done by an external agent on a system is equal to the change in kinetic energy of the system:
$$W = \Delta E_k$$
General work-energy theorem (translation and rotation)
The work done W by an external agent which exerts a force F (at r) and torque τ on an object along a curved path C is:
$$W = \Delta T = \int_C \left( \mathbf{F} \cdot d\mathbf{r} + \boldsymbol{\tau} \cdot \hat{\mathbf{n}} \, d\theta \right)$$
where θ is the angle of rotation about an axis defined by a unit vector $\hat{\mathbf{n}}$.
Kinetic energy
The change in kinetic energy for an object initially traveling at speed $v_1$ and later at speed $v_2$ is:
$$\Delta E_k = \tfrac{1}{2} m \left( v_2^2 - v_1^2 \right)$$
Elastic potential energy
For a stretched spring fixed at one end obeying Hooke's law, the elastic potential energy is
$$\Delta E_p = \tfrac{1}{2} k \left( r_2 - r_1 \right)^2$$
where r2 and r1 are collinear coordinates of the free end of the spring, in the direction of the extension/compression, and k is the spring constant.
Euler's equations for rigid body dynamics
Euler also worked out analogous laws of motion to those of Newton, see Euler's laws of motion. These extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is:
$$\boldsymbol{\tau} = \mathbf{I} \cdot \boldsymbol{\alpha} + \boldsymbol{\omega} \times \left( \mathbf{I} \cdot \boldsymbol{\omega} \right)$$
where $\mathbf{I}$ is the moment of inertia tensor.
General planar motion
The previous equations for planar motion can be used here: corollaries of momentum, angular momentum, etc. follow immediately by applying the above definitions. For any object moving in any path in a plane, the following general results apply to the particle.
Central force motion
For a massive body moving in a central potential due to another object, which depends only on the radial separation $r$ between the centers of masses of the two objects, the equation of motion is $m \ddot{\mathbf{r}} = F(r)\, \hat{\mathbf{e}}_r$.
Equations of motion (constant acceleration)
These equations can be used only when acceleration is constant. If acceleration is not constant then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration (see above).
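For reference, the standard constant-acceleration relations are (well-known results; the symbols u for initial velocity, v for final velocity, s for displacement, a for acceleration and t for time are chosen here, since the article's tables were not preserved):
$$v = u + at, \qquad s = ut + \tfrac{1}{2} a t^2, \qquad v^2 = u^2 + 2 a s, \qquad s = \tfrac{1}{2} (u + v)\, t.$$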
Galilean frame transforms
For classical (Galileo-Newtonian) mechanics, the transformation law from one inertial or accelerating (including rotation) frame (reference frame traveling at constant velocity - including zero) to another is the Galilean transform.
Unprimed quantities refer to position, velocity and acceleration in one frame F; primed quantities refer to position, velocity and acceleration in another frame F' moving at translational velocity V or angular velocity Ω relative to F. Conversely F moves at velocity (−V or −Ω) relative to F'. The situation is similar for relative accelerations.
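For the pure-translation case the transform takes the familiar form (a standard result, stated in notation chosen here):
$$\mathbf{r}' = \mathbf{r} - \mathbf{V} t, \qquad \mathbf{v}' = \mathbf{v} - \mathbf{V}, \qquad \mathbf{a}' = \mathbf{a},$$
showing that accelerations, and hence Newton's second law, are unchanged between inertial frames.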
Mechanical oscillators
SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator respectively.
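As a sketch of the equations these abbreviations name (standard forms; the symbols ω and ζ are introduced here, since the accompanying table was not preserved), SHM/SHO obey the first equation below and DHM/DHO the second:
$$\ddot{x} + \omega^2 x = 0, \qquad \ddot{x} + 2 \zeta \omega \dot{x} + \omega^2 x = 0,$$
with undamped solution $x(t) = A \cos(\omega t + \varphi)$.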
See also
List of physics formulae
Defining equation (physical chemistry)
Constitutive equation
Mechanics
Optics
Electromagnetism
Thermodynamics
Acoustics
Isaac Newton
List of equations in wave theory
List of relativistic equations
List of equations in fluid mechanics
List of equations in gravitation
List of electromagnetism equations
List of photonics equations
List of equations in quantum mechanics
List of equations in nuclear and particle physics
Notes
References
Classical mechanics
Classical Mechanics

Collagen
https://en.wikipedia.org/wiki/Collagen

Collagen is the main structural protein in the extracellular matrix of the connective tissues of many animals. It is the most abundant protein in mammals, making up 25% to 35% of protein content. Amino acids are bound together to form a triple helix of elongated fibril known as a collagen helix. It is mostly found in cartilage, bones, tendons, ligaments, and skin. Vitamin C is vital for collagen synthesis, while Vitamin E improves its production.
Depending on the degree of mineralization, collagen tissues may be rigid (bone) or compliant (tendon) or have a gradient from rigid to compliant (cartilage). Collagen is also abundant in corneas, blood vessels, the gut, intervertebral discs, and the dentin in teeth. In muscle tissue, it serves as a major component of the endomysium. Collagen constitutes 1% to 2% of muscle tissue and 6% by weight of skeletal muscle. The fibroblast is the most common cell creating collagen in animals. Gelatin, which is used in food and industry, is collagen that was irreversibly hydrolyzed using heat, basic solutions, or weak acids.
Etymology
The name collagen comes from the Greek κόλλα (kólla), meaning "glue", and suffix -γέν, -gen, denoting "producing".
Types
As of 2011, 28 types of human collagen have been identified, described, and classified according to their structure. This structural diversity reflects collagen's wide range of functions. All of the types contain at least one triple helix. Over 90% of the collagen in humans is type I and type III collagen.
Fibrillar (type I, II, III, V, XI)
Non-fibrillar
FACIT (fibril-associated collagens with interrupted triple helices) (types IX, XII, XIV, XIX, XXI)
Short-chain (types VIII, X)
Basement membrane (type IV)
Multiplexin (multiple triple helix domains with interruptions) (types XV, XVIII)
MACIT (membrane-associated collagens with interrupted triple helices) (types XIII, XVII)
Microfibril-forming (type VI)
Anchoring fibrils (type VII)
The five most common types are:
Type I: skin, tendon, vasculature, organs, bone (main component of the organic part of bone)
Type II: cartilage (main collagenous component of cartilage)
Type III: reticulate (main component of reticular fibers), commonly found alongside type I
Type IV: forms basal lamina, the epithelium-secreted layer of the basement membrane
Type V: cell surfaces, hair, and placenta
In humans
Cardiac
The collagenous cardiac skeleton which includes the four heart valve rings, is histologically, elastically and uniquely bound to cardiac muscle. The cardiac skeleton also includes the separating septa of the heart chambers – the interventricular septum and the atrioventricular septum. Collagen contribution to the measure of cardiac performance summarily represents a continuous torsional force opposed to the fluid mechanics of blood pressure emitted from the heart. The collagenous structure that divides the upper chambers of the heart from the lower chambers is an impermeable membrane that excludes both blood and electrical impulses through typical physiological means. With support from collagen, atrial fibrillation never deteriorates to ventricular fibrillation. Collagen is layered in variable densities with smooth muscle mass. The mass, distribution, age, and density of collagen all contribute to the compliance required to move blood back and forth. Individual cardiac valvular leaflets are folded into shape by specialized collagen under variable pressure. Gradual calcium deposition within collagen occurs as a natural function of aging. Calcified points within collagen matrices show contrast in a moving display of blood and muscle, enabling methods of cardiac imaging technology to arrive at ratios essentially stating blood in (cardiac input) and blood out (cardiac output). Pathology of the collagen underpinning of the heart is understood within the category of connective tissue disease.
Bone grafts
As the skeleton forms the structure of the body, it is vital that it maintains its strength, even after breaks and injuries. Collagen is used in bone grafting because its triple-helix structure makes it a very strong molecule. It is ideal for use in bones, as it does not compromise the structural integrity of the skeleton. The triple helical structure prevents collagen from being broken down by enzymes, it enables adhesiveness of cells and it is important for the proper assembly of the extracellular matrix.
Tissue regeneration
Collagen scaffolds are used in tissue regeneration, whether in sponges, thin sheets, gels, or fibers. Collagen has favorable properties for tissue regeneration, such as pore structure, permeability, hydrophilicity, and stability in vivo. Collagen scaffolds also support deposition of cells, such as osteoblasts and fibroblasts, and once inserted, facilitate growth to proceed normally.
Reconstructive surgery
Collagens are widely used in the construction of artificial skin substitutes used for managing severe burns and wounds. These collagens may be derived from cow, horse, pig, or even human sources; and are sometimes used in combination with silicones, glycosaminoglycans, fibroblasts, growth factors and other substances.
Wound healing
Collagen is one of the body's key natural resources and a component of skin tissue that can benefit all stages of wound healing. When collagen is made available to the wound bed, closure can occur. This avoids wound deterioration and procedures such as amputation.
Collagen is used as a natural wound dressing because it has properties that artificial wound dressings do not have. It resists bacteria, which is vitally important in wound dressing. As a burn dressing, collagen helps it heal fast by helping granulation tissue to grow over the burn.
Throughout the four phases of wound healing, collagen performs the following functions:
Guiding: collagen fibers guide fibroblasts because they migrate along a connective tissue matrix.
Chemotaxis: collagen fibers have a large surface area which attracts fibrogenic cells which help healing.
Nucleation: in the presence of certain neutral salt molecules, collagen can act as a nucleating agent causing formation of fibrillar structures.
Hemostasis: Blood platelets interact with the collagen to make a hemostatic plug.
Use in basic research
Collagen is used in laboratory studies for cell culture, studying cell behavior and cellular interactions with the extracellular environment. Collagen is also widely used as a bioink for 3D bioprinting and biofabrication of 3D tissue models.
Biology
The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2). The amino acid composition of collagen is atypical for proteins, particularly with respect to its high hydroxyproline content. The most common motifs in collagen's amino acid sequence are glycine-proline-X and glycine-X-hydroxyproline, where X is any amino acid other than glycine, proline or hydroxyproline.
The table below lists average amino acid composition for fish and mammal skin.
Synthesis
First, a three-dimensional stranded structure is assembled, mostly composed of the amino acids glycine and proline. This is the collagen precursor procollagen. Then, procollagen is modified by the addition of hydroxyl groups to the amino acids proline and lysine. This step is important for later glycosylation and the formation of collagen's triple helix structure. Because the hydroxylase enzymes performing these reactions require vitamin C as a cofactor, a long-term deficiency in this vitamin results in impaired collagen synthesis and scurvy. These hydroxylation reactions are catalyzed by the enzymes prolyl 4-hydroxylase and lysyl hydroxylase. The reaction consumes one ascorbate molecule per hydroxylation. Collagen synthesis occurs inside and outside cells.
The most common form of collagen is fibrillary collagen. Another common form is meshwork collagen, which is often involved in the formation of filtration systems. All types of collagen are triple helices, but differ in the make-up of their alpha peptides created in step 2. Below we discuss the formation of fibrillary collagen.
Transcription of mRNA: Synthesis begins with turning on genes associated with the formation of a particular alpha peptide (typically alpha 1, 2 or 3). About 44 genes are associated with collagen formation, each coding for a specific mRNA sequence, and are typically named with the "COL" prefix.
Pre-pro-peptide formation: The created mRNA exits the cell nucleus into the cytoplasm. There, it links with the ribosomal subunits and is translated into a peptide. The peptide goes into the endoplasmic reticulum for post-translational processing. It is directed there by a signal recognition particle on the endoplasmic reticulum, which recognizes the peptide's signal sequence (the early part of the sequence). The processed product is a pre-pro-peptide called preprocollagen.
Pro-collagen formation: Three modifications of the pre-pro-peptide form the alpha peptide:
The signal peptide on the N-terminal is removed. The molecule is now called propeptide.
Lysines and prolines are hydroxylated by the enzymes 'prolyl hydroxylase' and 'lysyl hydroxylase', producing hydroxyproline and hydroxylysine. This helps in cross-linking the alpha peptides. This enzymatic step requires vitamin C as a cofactor. In scurvy, the lack of hydroxylation of prolines and lysines causes a looser triple helix (which is formed by three alpha peptides).
Glycosylation occurs by adding either glucose or galactose monomers onto the hydroxyl groups that were placed onto lysines, but not on prolines.
Three of the hydroxylated and glycosylated propeptides twist into a triple helix (except for its ends), forming procollagen. It is packaged into a transfer vesicle destined for the Golgi apparatus.
Modification and secretion: In the Golgi apparatus, the procollagen goes through one last post-translational modification, adding oligosaccharides (not monosaccharides as in step 3). Then it is packaged into a secretory vesicle to be secreted from the cell.
Tropocollagen formation: Outside the cell, membrane-bound enzymes called collagen peptidases remove the unwound ends of the molecule, producing tropocollagen. Defects in this step produce various collagenopathies called Ehlers–Danlos syndrome. This step is absent when synthesizing type III, a type of fibrillar collagen.
Collagen fibril formation: Lysyl oxidase, a copper-dependent enzyme, acts on lysines and hydroxylysines, producing aldehyde groups, which eventually form covalent bonds between tropocollagen molecules. This polymer of tropocollagen is called a collagen fibril.
Amino acids
Collagen has an unusual amino acid composition and sequence:
Glycine is found at almost every third residue.
Proline makes up about 17% of collagen.
Collagen contains two unusual derivative amino acids not directly inserted during translation. These amino acids are found at specific locations relative to glycine and are modified post-translationally by different enzymes, both of which require vitamin C as a cofactor.
Hydroxyproline derived from proline
Hydroxylysine derived from lysine – depending on the type of collagen, varying numbers of hydroxylysines are glycosylated (mostly having disaccharides attached).
Cortisol stimulates degradation of (skin) collagen into amino acids.
Collagen I formation
Most collagen forms in a similar manner, but the following process is typical for type I:
Inside the cell
Two types of alpha chains – alpha-1 and alpha 2, are formed during translation on ribosomes along the rough endoplasmic reticulum (RER). These peptide chains known as preprocollagen, have registration peptides on each end and a signal peptide.
Polypeptide chains are released into the lumen of the RER.
Signal peptides are cleaved inside the RER and the chains are now known as pro-alpha chains.
Hydroxylation of lysine and proline amino acids occurs inside the lumen. This process is dependent on and consumes ascorbic acid (vitamin C) as a cofactor.
Glycosylation of specific hydroxylysine residues occurs.
Triple alpha helical structure is formed inside the endoplasmic reticulum from two alpha-1 chains and one alpha-2 chain.
Procollagen is shipped to the Golgi apparatus, where it is packaged and secreted into extracellular space by exocytosis.
Outside the cell
Registration peptides are cleaved and tropocollagen is formed by procollagen peptidase.
Multiple tropocollagen molecules form collagen fibrils, via covalent cross-linking (aldol reaction) by lysyl oxidase which links hydroxylysine and lysine residues. Multiple collagen fibrils form into collagen fibers.
Collagen may be attached to cell membranes via several types of protein, including fibronectin, laminin, fibulin and integrin.
Molecular structure
A single collagen molecule, tropocollagen, is used to make up larger collagen aggregates, such as fibrils. It is approximately 300 nm long and 1.5 nm in diameter, and it is made up of three polypeptide strands (called alpha peptides, see step 2), each of which has the conformation of a left-handed helix – this should not be confused with the right-handed alpha helix. These three left-handed helices are twisted together into a right-handed triple helix or "super helix", a cooperative quaternary structure stabilized by many hydrogen bonds. With type I collagen and possibly all fibrillar collagens, if not all collagens, each triple-helix associates into a right-handed super-super-coil referred to as the collagen microfibril. Each microfibril is interdigitated with its neighboring microfibrils to a degree that might suggest they are individually unstable, although within collagen fibrils, they are so well ordered as to be crystalline.
A distinctive feature of collagen is the regular arrangement of amino acids in each of the three chains of these collagen subunits. The sequence often follows the pattern Gly-Pro-X or Gly-X-Hyp, where X may be any of various other amino acid residues. Proline or hydroxyproline constitute about 1/6 of the total sequence. With glycine accounting for the 1/3 of the sequence, this means approximately half of the collagen sequence is not glycine, proline or hydroxyproline, a fact often missed due to the distraction of the unusual GX1X2 character of collagen alpha-peptides. The high glycine content of collagen is important with respect to stabilization of the collagen helix as this allows the very close association of the collagen fibers within the molecule, facilitating hydrogen bonding and the formation of intermolecular cross-links. This kind of regular repetition and high glycine content is found in only a few other fibrous proteins, such as silk fibroin.
Collagen is not only a structural protein. Due to its key role in the determination of cell phenotype, cell adhesion, tissue regulation, and infrastructure, many sections of its non-proline-rich regions have cell or matrix association/regulation roles. The relatively high content of proline and hydroxyproline rings, with their geometrically constrained carboxyl and (secondary) amino groups, along with the rich abundance of glycine, accounts for the tendency of the individual polypeptide strands to form left-handed helices spontaneously, without any intrachain hydrogen bonding.
Because glycine is the smallest amino acid with no side chain, it plays a unique role in fibrous structural proteins. In collagen, Gly is required at every third position because the assembly of the triple helix puts this residue at the interior (axis) of the helix, where there is no space for a larger side group than glycine's single hydrogen atom. For the same reason, the rings of the Pro and Hyp must point outward. These two amino acids help stabilize the triple helix – Hyp even more so than Pro; a lower concentration of them is required in animals such as fish, whose body temperatures are lower than most warm-blooded animals. Lower proline and hydroxyproline contents are characteristic of cold-water, but not warm-water fish; the latter tend to have similar proline and hydroxyproline contents to mammals. The lower proline and hydroxyproline contents of cold-water fish and other poikilotherm animals leads to their collagen having a lower thermal stability than mammalian collagen. This lower thermal stability means that gelatin derived from fish collagen is not suitable for many food and industrial applications.
The tropocollagen subunits spontaneously self-assemble, with regularly staggered ends, into even larger arrays in the extracellular spaces of tissues. Additional assembly of fibrils is guided by fibroblasts, which deposit fully formed fibrils from fibripositors. In the fibrillar collagens, molecules are staggered to adjacent molecules by about 67 nm (a unit that is referred to as 'D' and changes depending upon the hydration state of the aggregate). In each D-period repeat of the microfibril, there is a part containing five molecules in cross-section, called the "overlap", and a part containing only four molecules, called the "gap". These overlap and gap regions are retained as microfibrils assemble into fibrils, and are thus viewable using electron microscopy. The triple helical tropocollagens in the microfibrils are arranged in a quasihexagonal packing pattern.
There is some covalent crosslinking within the triple helices and a variable amount of covalent crosslinking between tropocollagen helices forming well-organized aggregates (such as fibrils). Larger fibrillar bundles are formed with the aid of several different classes of proteins (including different collagen types), glycoproteins, and proteoglycans to form the different types of mature tissues from alternate combinations of the same key players. Collagen's insolubility was a barrier to the study of monomeric collagen until it was found that tropocollagen from young animals can be extracted because it is not yet fully crosslinked. However, advances in microscopy techniques (i.e. electron microscopy (EM) and atomic force microscopy (AFM)) and X-ray diffraction have enabled researchers to obtain increasingly detailed images of collagen structure in situ. These later advances are particularly important to better understanding the way in which collagen structure affects cell–cell and cell–matrix communication and how tissues are constructed in growth and repair and changed in development and disease. For example, using AFM–based nanoindentation it has been shown that a single collagen fibril is a heterogeneous material along its axial direction with significantly different mechanical properties in its gap and overlap regions, correlating with its different molecular organizations in these two regions.
Collagen fibrils/aggregates are arranged in different combinations and concentrations in various tissues to provide varying tissue properties. In bone, entire collagen triple helices lie in a parallel, staggered array. 40 nm gaps between the ends of the tropocollagen subunits (approximately equal to the gap region) probably serve as nucleation sites for the deposition of long, hard, fine crystals of the mineral component, which is (approximately) hydroxylapatite, Ca10(OH)2(PO4)6. Type I collagen gives bone its tensile strength.
Associated disorders
Collagen-related diseases most commonly arise from genetic defects or nutritional deficiencies that affect the biosynthesis, assembly, posttranslational modification, secretion, or other processes involved in normal collagen production.
In addition to the above-mentioned disorders, excessive deposition of collagen occurs in scleroderma.
Diseases
One thousand mutations have been identified in 12 out of more than 20 types of collagen. These mutations can lead to various diseases at the tissue level.
Osteogenesis imperfecta – Caused by a mutation in type 1 collagen, dominant autosomal disorder, results in weak bones and irregular connective tissue, some cases can be mild while others can be lethal. Mild cases have lowered levels of collagen type 1 while severe cases have structural defects in collagen.
Chondrodysplasias – Skeletal disorder believed to be caused by a mutation in type 2 collagen, further research is being conducted to confirm this.
Ehlers–Danlos syndrome – Thirteen different types of this disorder, which lead to deformities in connective tissue, are known. Some of the rarer types can be lethal, leading to the rupture of arteries. Each syndrome is caused by a different mutation. For example, the vascular type (vEDS) of this disorder is caused by a mutation in collagen type 3.
Alport syndrome – Can be passed on genetically, usually as X-linked dominant, but also as both an autosomal dominant and autosomal recessive disorder, those with the condition have problems with their kidneys and eyes, loss of hearing can also develop during the childhood or adolescent years.
Knobloch syndrome – Caused by a mutation in the COL18A1 gene that codes for the production of collagen XVIII. Patients present with protrusion of the brain tissue and degeneration of the retina; an individual who has family members with the disorder is at an increased risk of developing it themselves since there is a hereditary link.
Animal harvesting
When not synthesized, collagen can be harvested from animal skin. This has led to deforestation as has occurred in Paraguay where large collagen producers buy large amounts of cattle hides from regions that have been clear-cut for cattle grazing.
Characteristics
Collagen is one of the long, fibrous structural proteins whose functions are quite different from those of globular proteins, such as enzymes. Tough bundles of collagen called collagen fibers are a major component of the extracellular matrix that supports most tissues and gives cells structure from the outside, but collagen is also found inside certain cells. Collagen has great tensile strength, and is the main component of fascia, cartilage, ligaments, tendons, bone and skin. Along with elastin and soft keratin, it is responsible for skin strength and elasticity, and its degradation leads to wrinkles that accompany aging. It strengthens blood vessels and plays a role in tissue development. It is present in the cornea and lens of the eye in crystalline form. It may be one of the most abundant proteins in the fossil record, given that it appears to fossilize frequently, even in bones from the Mesozoic and Paleozoic.
Mechanical properties
Collagen is a complex hierarchical material with mechanical properties that vary significantly across different scales.
On the molecular scale, atomistic and coarse-grained modeling simulations, as well as numerous experimental methods, have led to several estimates of the Young's modulus of collagen at the molecular level. Only above a certain strain rate is there a strong relationship between elastic modulus and strain rate, possibly due to the large number of atoms in a collagen molecule. The length of the molecule is also important, where longer molecules have lower tensile strengths than shorter ones due to short molecules having a large proportion of hydrogen bonds being broken and reformed.
On the fibrillar scale, collagen has a lower modulus compared to the molecular scale, and varies depending on geometry, scale of observation, deformation state, and hydration level. By increasing the crosslink density from zero to 3 per molecule, the maximum stress the fibril can support increases from 0.5 GPa to 6 GPa.
Limited tests have been done on the tensile strength of the collagen fiber, but generally it has been shown to have a lower Young's modulus compared to fibrils.
When studying the mechanical properties of collagen, tendon is often chosen as the ideal material because it is close to a pure and aligned collagen structure. However, at the macro, tissue scale, the vast number of structures that collagen fibers and fibrils can be arranged into results in highly variable properties. For example, tendon has primarily parallel fibers, whereas skin consists of a net of wavy fibers, resulting in a much higher strength and lower ductility in tendon compared to skin. The mechanical properties of collagen at multiple hierarchical levels is given.
Collagen is known to be a viscoelastic solid. When the collagen fiber is modeled as two Kelvin–Voigt models in series, each consisting of a spring and a dashpot in parallel, the strain in the fiber evolves according to an equation in three defined material properties α, β, and γ, relating the fibrillar strain εD to the total strain εT.
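As background for this model (a standard constitutive law, not the article's elided equation), a single Kelvin–Voigt element, that is, a spring of modulus E in parallel with a dashpot of viscosity η (symbols chosen here), obeys
$$\sigma(t) = E\, \varepsilon(t) + \eta\, \dot{\varepsilon}(t),$$
and in the series arrangement described the two elements carry the same stress while their strains add, so that $\varepsilon_T$ is the sum of the two element strains, one of which is $\varepsilon_D$.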
Uses
Collagen has a wide variety of applications, from food to medical. In the medical industry, it is used in cosmetic surgery and burn surgery. In the food sector, one example is its use in sausage casings.
If collagen is subject to sufficient denaturation, such as by heating, the three tropocollagen strands separate partially or completely into globular domains, whose secondary structure, random coils, differs from the normal polyproline II (PPII) structure of collagen. This process describes the formation of gelatin, which is used in many foods, including flavored gelatin desserts. Besides food, gelatin has been used in pharmaceutical, cosmetic, and photography industries. It is also used as a dietary supplement, and has been advertised as a potential remedy against the ageing process.
From the Greek for glue, kolla, the word collagen means "glue producer" and refers to the early process of boiling the skin and sinews of horses and other animals to obtain glue. Collagen adhesive was used by Egyptians about 4,000 years ago, and Native Americans used it in bows about 1,500 years ago. The oldest glue in the world, carbon-dated as more than 8,000 years old, was found to be collagen – used as a protective lining on rope baskets and embroidered fabrics, to hold utensils together, and in crisscross decorations on human skulls. Collagen normally converts to gelatin, but survived due to dry conditions. Animal glues are thermoplastic, softening again upon reheating, so they are still used in making musical instruments such as fine violins and guitars, which may have to be reopened for repairs – an application incompatible with tough, synthetic plastic adhesives, which are permanent. Animal sinews and skins, including leather, have been used to make useful articles for millennia.
Gelatin-resorcinol-formaldehyde glue (and with formaldehyde replaced by less-toxic pentanedial and ethanedial) has been used to repair experimental incisions in rabbit lungs.
Cosmetics
Bovine collagen is widely used in dermal fillers for aesthetic correction of wrinkles and skin aging. Collagen creams are also widely sold even though collagen cannot penetrate the skin because its fibers are too large. Collagen is a vital protein in skin, hair, nails, and other tissues. Its production decreases with age and with factors like sun damage and smoking. Collagen supplements, derived from sources like fish and cattle, are marketed to improve skin, hair, and nails. Studies show some skin benefits, but these supplements often contain other beneficial ingredients, making it unclear if collagen alone is effective. There is minimal evidence supporting collagen's benefits for hair and nails. Overall, the effectiveness of oral collagen supplements is not well-proven, and focusing on a healthy lifestyle and proven skincare methods like sun protection is recommended.
History
The molecular and packing structures of collagen eluded scientists over decades of research. The first evidence that it possesses a regular structure at the molecular level was presented in the mid-1930s. Research then concentrated on the conformation of the collagen monomer, producing several competing models, although correctly dealing with the conformation of each individual peptide chain. The triple-helical "Madras" model, proposed by G. N. Ramachandran in 1955, provided an accurate model of quaternary structure in collagen. This model was supported by further studies of higher resolution in the late 20th century.
The packing structure of collagen has not been defined to the same degree outside of the fibrillar collagen types, although it has been long known to be hexagonal. As with its monomeric structure, several conflicting models propose either that the packing arrangement of collagen molecules is 'sheet-like', or is microfibrillar. The microfibrillar structure of collagen fibrils in tendon, cornea and cartilage was imaged directly by electron microscopy in the late 20th century and early 21st century. The microfibrillar structure of rat tail tendon was modeled as being closest to the observed structure, although it oversimplified the topological progression of neighboring collagen molecules, and so did not predict the correct conformation of the discontinuous D-periodic pentameric arrangement termed microfibril.
See also
Collagen hybridizing peptide, a peptide that can bind to denatured collagen
Hypermobility spectrum disorder
Metalloprotease inhibitor
Osteoid, a collagen-containing component of bone
Collagen loss
References
Structural proteins
Edible thickening agents
Aging-related proteins

CNO cycle
https://en.wikipedia.org/wiki/CNO%20cycle

The CNO cycle (for carbon–nitrogen–oxygen; sometimes called Bethe–Weizsäcker cycle after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p–p cycle), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun.
Unlike the proton-proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the CNO cycle, but re-generated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos.
There are various alternative paths and catalysts involved in the CNO cycles, but all these cycles have the same net result:
4 ¹H + 2 e⁻ → ⁴He + 2 νₑ + 26.73 MeV
The positrons will almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star carrying away some energy. One nucleus goes on to become carbon, nitrogen, and oxygen isotopes through a number of transformations in a repeating cycle.
The proton–proton chain is more prominent in stars the mass of the Sun or less. This difference stems from the different temperature dependences of the two reactions; the pp-chain reaction starts at temperatures around 4×10⁶ K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10⁶ K, but its energy output rises much more rapidly with increasing temperatures so that it becomes the dominant source of energy at approximately 17×10⁶ K.
The Sun has a core temperature of around 15.7×10⁶ K, and only a small fraction of the helium nuclei produced in the Sun are
born in the CNO cycle.
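As a rough quantitative illustration of this temperature sensitivity (approximate scalings commonly quoted near solar-core conditions, not figures from this article):
$$\epsilon_{pp} \propto T^{4}, \qquad \epsilon_{\mathrm{CNO}} \propto T^{17},$$
which is why a modest increase in core temperature shifts the balance toward the CNO cycle.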
The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s.
The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020 by the BOREXINO collaboration. This was also the first experimental confirmation that the Sun had a CNO cycle, that the proposed magnitude of the cycle was accurate, and that von Weizsäcker and Bethe were correct.
Cold CNO cycles
Under typical conditions found in stars, catalytic hydrogen burning by the CNO cycles is limited by proton captures. Specifically, the timescale for beta decay of the radioactive nuclei produced is faster than the timescale for fusion. Because of the long timescales involved, the cold CNO cycles convert hydrogen to helium slowly, allowing them to power stars in quiescent equilibrium for many years.
CNO-I
The first proposed catalytic cycle for the conversion of hydrogen into helium was initially called the carbon–nitrogen cycle (CN-cycle), also referred to as the Bethe–Weizsäcker cycle in honor of the independent work of Carl Friedrich von Weizsäcker in 1937–38 and Hans Bethe. Bethe's 1939 papers on the CN-cycle drew on three earlier papers written in collaboration with Robert Bacher and Milton Stanley Livingston and which came to be known informally as Bethe's Bible. It was considered the standard work on nuclear physics for many years and was a significant factor in his being awarded the 1967 Nobel Prize in Physics. Bethe's original calculations suggested the CN-cycle was the Sun's primary source of energy. This conclusion arose from a belief that is now known to be mistaken, that the abundance of nitrogen in the sun is approximately 10%; it is actually less than half a percent. The CN-cycle, named as it contains no stable isotope of oxygen, involves the following cycle of transformations:
¹²C → ¹³N → ¹³C → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C
This cycle is now understood as being the first part of a larger process, the CNO-cycle, and the main reactions in this part of the cycle (CNO-I) are:
¹²C + ¹H → ¹³N + γ
¹³N → ¹³C + e⁺ + νₑ (half-life of 9.965 minutes)
¹³C + ¹H → ¹⁴N + γ
¹⁴N + ¹H → ¹⁵O + γ
¹⁵O → ¹⁵N + e⁺ + νₑ (half-life of 122.24 seconds)
¹⁵N + ¹H → ¹²C + ⁴He
where the carbon-12 nucleus used in the first reaction is regenerated in the last reaction. After the two positrons emitted annihilate with two ambient electrons, producing an additional 2.04 MeV, the total energy released in one cycle is 26.73 MeV; in some texts, authors erroneously include the positron annihilation energy in with the beta-decay Q-value and then neglect the equal amount of energy released by annihilation, leading to possible confusion. All values are calculated with reference to the Atomic Mass Evaluation 2003.
The limiting (slowest) reaction in the CNO-I cycle is the proton capture on ¹⁴N. In 2006 it was experimentally measured down to stellar energies, revising the calculated age of globular clusters by around 1 billion years.
The neutrinos emitted in beta decay will have a spectrum of energy ranges, because although momentum is conserved, the momentum can be shared in any way between the positron and neutrino, with either emitted at rest and the other taking away the full energy, or anything in between, so long as all the energy from the Q-value is used. The total momentum received by the positron and the neutrino is not great enough to cause a significant recoil of the much heavier daughter nucleus and hence, its contribution to kinetic energy of the products, for the precision of values given here, can be neglected. Thus the neutrino emitted during the decay of nitrogen-13 can have an energy from zero up to 1.20 MeV, and the neutrino emitted during the decay of oxygen-15 can have an energy from zero up to 1.73 MeV. On average, about 1.7 MeV of the total energy output is taken away by neutrinos for each loop of the cycle, leaving about 25 MeV available for producing luminosity.
CNO-II
In a minor branch of the above reaction, occurring in the Sun's core 0.04% of the time, the final reaction involving ¹⁵N shown above does not produce carbon-12 and an alpha particle, but instead produces oxygen-16 and a photon and continues
¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁴N → ¹⁵O → ¹⁵N
In detail:
¹⁵N + ¹H → ¹⁶O + γ
¹⁶O + ¹H → ¹⁷F + γ
¹⁷F → ¹⁷O + e⁺ + νₑ (half-life of 64.49 seconds)
¹⁷O + ¹H → ¹⁴N + ⁴He
¹⁴N + ¹H → ¹⁵O + γ
¹⁵O → ¹⁵N + e⁺ + νₑ (half-life of 122.24 seconds)
Like the carbon, nitrogen, and oxygen involved in the main branch, the fluorine produced in the minor branch is merely an intermediate product; at steady state, it does not accumulate in the star.
CNO-III
This subdominant branch is significant only for massive stars. The reactions are started when one of the reactions in CNO-II results in fluorine-18 and a photon instead of nitrogen-14 and an alpha particle, and continues
¹⁷O → ¹⁸F → ¹⁸O → ¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O
In detail:
¹⁷O + ¹H → ¹⁸F + γ
¹⁸F → ¹⁸O + e⁺ + νₑ (half-life of 109.771 minutes)
¹⁸O + ¹H → ¹⁵N + ⁴He
¹⁵N + ¹H → ¹⁶O + γ
¹⁶O + ¹H → ¹⁷F + γ
¹⁷F → ¹⁷O + e⁺ + νₑ (half-life of 64.49 seconds)
CNO-IV
Like the CNO-III, this branch is also only significant in massive stars. The reactions are started when one of the reactions in CNO-III results in fluorine-19 and a photon instead of nitrogen-15 and an alpha particle, and continues
¹⁸O → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁸F → ¹⁸O
In detail:
¹⁸O + ¹H → ¹⁹F + γ
¹⁹F + ¹H → ¹⁶O + ⁴He
¹⁶O + ¹H → ¹⁷F + γ
¹⁷F → ¹⁷O + e⁺ + νₑ (half-life of 64.49 seconds)
¹⁷O + ¹H → ¹⁸F + γ
¹⁸F → ¹⁸O + e⁺ + νₑ (half-life of 109.771 minutes)
In some instances ¹⁹F can combine with a helium nucleus to start a sodium-neon cycle.
Hot CNO cycles
Under conditions of higher temperature and pressure, such as those found in novae and X-ray bursts, the rate of proton captures exceeds the rate of beta-decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.
HCNO-I
The difference between the CNO-I cycle and the HCNO-I cycle is that ¹³N captures a proton instead of decaying, leading to the total sequence
¹²C → ¹³N → ¹⁴O → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C
In detail:
¹²C + ¹H → ¹³N + γ
¹³N + ¹H → ¹⁴O + γ
¹⁴O → ¹⁴N + e⁺ + νₑ (half-life of 70.641 seconds)
¹⁴N + ¹H → ¹⁵O + γ
¹⁵O → ¹⁵N + e⁺ + νₑ (half-life of 122.24 seconds)
¹⁵N + ¹H → ¹²C + ⁴He
HCNO-II
The notable difference between the CNO-II cycle and the HCNO-II cycle is that ¹⁷F captures a proton instead of decaying, producing neon-18, leading to the total sequence
¹⁵N → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F → ¹⁵O → ¹⁵N
In detail:
¹⁵N + ¹H → ¹⁶O + γ
¹⁶O + ¹H → ¹⁷F + γ
¹⁷F + ¹H → ¹⁸Ne + γ
¹⁸Ne → ¹⁸F + e⁺ + νₑ (half-life of 1.672 seconds)
¹⁸F + ¹H → ¹⁵O + ⁴He
¹⁵O → ¹⁵N + e⁺ + νₑ (half-life of 122.24 seconds)
HCNO-III
An alternative to the HCNO-II cycle is that ¹⁸F captures a proton, moving towards higher mass and using the same helium-production mechanism as the CNO-IV cycle, as
¹⁸F → ¹⁹Ne → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F
In detail:
¹⁸F + ¹H → ¹⁹Ne + γ
¹⁹Ne → ¹⁹F + e⁺ + νₑ (half-life of 17.22 seconds)
¹⁹F + ¹H → ¹⁶O + ⁴He
¹⁶O + ¹H → ¹⁷F + γ
¹⁷F + ¹H → ¹⁸Ne + γ
¹⁸Ne → ¹⁸F + e⁺ + νₑ (half-life of 1.672 seconds)
Use in astronomy
While the total number of "catalytic" nuclei is conserved in the cycle, in stellar evolution the relative proportions of the nuclei are altered. When the cycle is run to equilibrium, the ratio of carbon-12 to carbon-13 nuclei is driven to 3.5, and nitrogen-14 becomes the most numerous nucleus, regardless of initial composition. During a star's evolution, convective mixing episodes move material within which the CNO cycle has operated from the star's interior to the surface, altering the observed composition of the star. Red giant stars are observed to have lower carbon-12/carbon-13 and carbon-12/nitrogen-14 ratios than main sequence stars do, which is considered convincing evidence for the operation of the CNO cycle.
See also
Aneutronic fusion
Cold fusion
Fusion power
Nuclear fusion
Proton–proton chain, as found in stars like the Sun
Stellar nucleosynthesis, the whole topic
Triple-alpha process, how carbon-12 is produced from lighter nuclei
Footnotes
References
Further reading
Nuclear fusion reactions
Carbon
Nitrogen
Oxygen
Fluorine
Neon
Nucleosynthesis | CNO cycle | [
"Physics",
"Chemistry"
] | 3,692 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear fusion reactions",
"Nuclear physics",
"Nuclear fusion"
] |
6,085 | https://en.wikipedia.org/wiki/Cauchy%20sequence | In mathematics, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all excluding a finite number of elements of the sequence are less than that given distance from each other. Cauchy sequences are named after Augustin-Louis Cauchy; they may occasionally be known as fundamental sequences.
It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the sequence of square roots of natural numbers, a_n = √n:
the consecutive terms become arbitrarily close to each other – their differences
√(n+1) − √n = 1/(√(n+1) + √n)
tend to zero as the index n grows. However, with growing values of n, the terms a_n become arbitrarily large. So, for any index n and distance d, there exists an index m big enough such that a_m − a_n > d. As a result, no matter how far one goes, the remaining terms of the sequence never get close to each other; hence the sequence is not Cauchy.
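The behaviour described above is easy to check numerically; the following short Python sketch (added for illustration, not from the article) shows consecutive differences shrinking while terms far apart still separate by any chosen distance:

import math

a = [math.sqrt(n) for n in range(1, 1_000_001)]  # a[k] = sqrt(k + 1)

# Consecutive terms become arbitrarily close: the gap near n = 10**6 ...
print(a[-1] - a[-2])            # about 0.0005

# ... yet the sequence is not Cauchy: sqrt(4n) - sqrt(n) = sqrt(n)
n = 10_000
print(a[4 * n - 1] - a[n - 1])  # 100.0, and this grows without bound in n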
The utility of Cauchy sequences lies in the fact that in a complete metric space (one where all such sequences are known to converge to a limit), the criterion for convergence depends only on the terms of the sequence itself, as opposed to the definition of convergence, which uses the limit value as well as the terms. This is often exploited in algorithms, both theoretical and applied, where an iterative process can be shown relatively easily to produce a Cauchy sequence, consisting of the iterates, thus fulfilling a logical condition, such as termination.
Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters and Cauchy nets.
In real numbers
A sequence
x₁, x₂, x₃, ...
of real numbers is called a Cauchy sequence if for every positive real number ε there is a positive integer N such that for all natural numbers m, n > N,
|x_m − x_n| < ε,
where the vertical bars denote the absolute value. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring x_m − x_n to be infinitesimal for every pair of infinite m, n.
For any real number r, the sequence of truncated decimal expansions of r forms a Cauchy sequence. For example, when r = π, this sequence is (3, 3.1, 3.14, 3.141, ...). The mth and nth terms differ by at most 10^(1−m) when m < n, and as m grows this becomes smaller than any fixed positive number ε.
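This bound can be verified mechanically; a small Python sketch (an added illustration, using a 20-digit value of π):

from decimal import Decimal, ROUND_DOWN

pi = Decimal("3.14159265358979323846")

def term(m):
    # m-th term of the sequence: pi truncated to m - 1 decimal places
    return pi.quantize(Decimal(10) ** (1 - m), rounding=ROUND_DOWN)

for m in range(1, 5):
    print(term(m))              # 3, 3.1, 3.14, 3.141

n = 12
for m in range(1, n):
    assert abs(term(n) - term(m)) < Decimal(10) ** (1 - m)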
Modulus of Cauchy convergence
If (x₁, x₂, x₃, ...) is a sequence in the set X, then a modulus of Cauchy convergence for the sequence is a function α from the set of natural numbers to itself, such that for all natural numbers k and all natural numbers m, n > α(k), |x_m − x_n| < 1/k.
Any sequence with a modulus of Cauchy convergence is a Cauchy sequence. The existence of a modulus for a Cauchy sequence follows from the well-ordering property of the natural numbers (let α(k) be the smallest possible N in the definition of Cauchy sequence, taking ε to be 1/k). The existence of a modulus also follows from the principle of countable choice. Regular Cauchy sequences are sequences with a given modulus of Cauchy convergence (usually α(k) = k or α(k) = 2^k). Any Cauchy sequence with a modulus of Cauchy convergence is equivalent to a regular Cauchy sequence; this can be proven without using any form of the axiom of choice.
Moduli of Cauchy convergence are used by constructive mathematicians who do not wish to use any form of choice. Using a modulus of Cauchy convergence can simplify both definitions and theorems in constructive analysis. Regular Cauchy sequences have been used in constructive mathematics textbooks.
In a metric space
Since the definition of a Cauchy sequence only involves metric concepts, it is straightforward to generalize it to any metric space X.
To do so, the absolute value |x_m − x_n| is replaced by the distance d(x_m, x_n) (where d denotes a metric) between x_m and x_n.
Formally, given a metric space (X, d), a sequence of elements of X
x₁, x₂, x₃, ...
is Cauchy if for every positive real number ε > 0 there is a positive integer N such that for all positive integers m, n > N, the distance d(x_m, x_n) < ε.
Roughly speaking, the terms of the sequence are getting closer and closer together in a way that suggests that the sequence ought to have a limit in X.
Nonetheless, such a limit does not always exist within X: the property of a space that every Cauchy sequence converges in the space is called completeness, and is detailed below.
Completeness
A metric space (X, d) in which every Cauchy sequence converges to an element of X is called complete.
Examples
The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. In this construction, each equivalence class of Cauchy sequences of rational numbers with a certain tail behavior—that is, each class of sequences that get arbitrarily close to one another— is a real number.
A rather different type of example is afforded by a metric space X which has the discrete metric (where any two distinct points are at distance 1 from each other). Any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.
Non-example: rational numbers
The rational numbers are not complete (for the usual distance):
There are sequences of rationals that converge (in ℝ) to irrational numbers; these are Cauchy sequences having no limit in ℚ. In fact, if a real number x is irrational, then the sequence (x_n), whose n-th term is the truncation to n decimal places of the decimal expansion of x, gives a Cauchy sequence of rational numbers with irrational limit x. Irrational numbers certainly exist in ℝ, for example:
The sequence defined by x₀ = 1, x_{n+1} = (x_n + 2/x_n)/2 consists of rational numbers (1, 3/2, 17/12, ...), which is clear from the definition; however it converges to the irrational square root of 2, see the Babylonian method of computing square roots (sketched in rational arithmetic after this list).
The sequence of ratios of consecutive Fibonacci numbers, x_n = F_n / F_{n−1}, which, if it converges at all, converges to a limit x satisfying x² = x + 1, and no rational number has this property. If one considers this as a sequence of real numbers, however, it converges to the real number (1 + √5)/2, the Golden ratio, which is irrational.
The values of the exponential, sine and cosine functions, exp(x), sin(x), cos(x), are known to be irrational for any rational value of x ≠ 0, but each can be defined as the limit of a rational Cauchy sequence, using, for instance, the Maclaurin series.
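The Babylonian iteration from the first example above can be run entirely in rational arithmetic; this sketch (added for illustration) uses Python's fractions module, so every iterate is exactly rational even though the limit, √2, is not:

from fractions import Fraction

x = Fraction(1)
for _ in range(5):
    print(x)        # 1, 3/2, 17/12, 577/408, 665857/470832
    x = (x + 2 / x) / 2

print(float(x))     # 1.4142135623730951, i.e. sqrt(2) to double precision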
Non-example: open interval
The open interval X = (0, 2) in the set of real numbers with the ordinary distance in ℝ is not a complete space: there is a sequence x_n = 1/n in it, which is Cauchy (for an arbitrarily small distance bound d > 0, all terms x_n with index n > 1/d fit in the interval (0, d)); however, it does not converge in X — its 'limit', the number 0, does not belong to the space X.
Other properties
Every convergent sequence (with limit s, say) is a Cauchy sequence, since, given any real number r > 0, beyond some fixed point, every term of the sequence is within distance r/2 of s, so any two terms of the sequence are within distance r of each other.
In any metric space, a Cauchy sequence is bounded (since for some N, all terms of the sequence from the N-th onwards are within distance 1 of each other, and if M is the largest distance between x_N and any terms up to the N-th, then no term of the sequence has distance greater than M + 1 from x_N).
In any metric space, a Cauchy sequence which has a convergent subsequence with limit s is itself convergent (with the same limit), since, given any real number r > 0, beyond some fixed point in the original sequence, every term of the subsequence is within distance r/2 of s, and any two terms of the original sequence are within distance r/2 of each other, so every term of the original sequence is within distance r of s.
These last two properties, together with the Bolzano–Weierstrass theorem, yield one standard proof of the completeness of the real numbers, closely related to both the Bolzano–Weierstrass theorem and the Heine–Borel theorem. Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of the real numbers as the completion of the rational numbers, makes the completeness of the real numbers tautological.
One of the standard illustrations of the advantage of being able to work with Cauchy sequences and make use of completeness is provided by consideration of the summation of an infinite series of real numbers
x₁ + x₂ + x₃ + ⋯
(or, more generally, of elements of any complete normed linear space, or Banach space). Such a series
is considered to be convergent if and only if the sequence of partial sums (s_m) is convergent, where s_m = x₁ + x₂ + ⋯ + x_m. It is a routine matter to determine whether the sequence of partial sums is Cauchy or not, since for positive integers p > q,
s_p − s_q = x_{q+1} + x_{q+2} + ⋯ + x_p.
If f : M → N is a uniformly continuous map between the metric spaces M and N and (x_n) is a Cauchy sequence in M, then (f(x_n)) is a Cauchy sequence in N. If (x_n) and (y_n) are two Cauchy sequences in the rational, real or complex numbers, then the sum (x_n + y_n) and the product (x_n y_n) are also Cauchy sequences.
Generalizations
In topological vector spaces
There is also a concept of Cauchy sequence for a topological vector space X: pick a local base B for X about 0; then (x_k) is a Cauchy sequence if for each member V ∈ B, there is some number N such that whenever n, m > N, the difference x_n − x_m
is an element of V. If the topology of X is compatible with a translation-invariant metric d, the two definitions agree.
In topological groups
Since the topological vector space definition of Cauchy sequence requires only that there be a continuous "subtraction" operation, it can just as well be stated in the context of a topological group: A sequence (x_k) in a topological group G is a Cauchy sequence if for every open neighbourhood U of the identity in G there exists some number N such that whenever m, n > N it follows that x_n x_m⁻¹ ∈ U. As above, it is sufficient to check this for the neighbourhoods in any local base of the identity in G.
As in the construction of the completion of a metric space, one can furthermore define the binary relation on Cauchy sequences in G that (x_k) and (y_k) are equivalent if for every open neighbourhood U of the identity in G there exists some number N such that whenever m, n > N it follows that x_n y_m⁻¹ ∈ U. This relation is an equivalence relation: It is reflexive since the sequences are Cauchy sequences. It is symmetric since y_n x_m⁻¹ = (x_m y_n⁻¹)⁻¹ ∈ U⁻¹, which by continuity of the inverse is another open neighbourhood of the identity. It is transitive since x_n z_l⁻¹ = (x_n y_m⁻¹)(y_m z_l⁻¹) ∈ U′U″, where U′ and U″ are open neighbourhoods of the identity such that U′U″ ⊆ U; such pairs exist by the continuity of the group operation.
In groups
There is also a concept of Cauchy sequence in a group G:
Let H = (H_r) be a decreasing sequence of normal subgroups of G of finite index.
Then a sequence (x_n) in G is said to be Cauchy (with respect to H) if and only if for any r there is N such that for all m, n > N, x_n x_m⁻¹ ∈ H_r.
Technically, this is the same thing as a topological group Cauchy sequence for a particular choice of topology on G, namely that for which H is a local base.
The set C of such Cauchy sequences forms a group (for the componentwise product), and the set C₀ of null sequences (sequences (x_n) such that x_n eventually lies in every H_r) is a normal subgroup of C. The factor group C/C₀ is called the completion of G with respect to H.
One can then show that this completion is isomorphic to the inverse limit of the sequence (G/H_r).
An example of this construction familiar in number theory and algebraic geometry is the construction of the p-adic completion of the integers with respect to a prime p. In this case, G is the integers under addition, and H_r is the additive subgroup consisting of integer multiples of p^r.
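As a small illustration of this construction (added here; the helper name is invented for the sketch), one can watch a Cauchy sequence for H_r = p^r ℤ stabilise level by level in its tower of residues modulo p, p², p³, ... – exactly the data of a point of the inverse limit:

p = 5

def residue_tower(x, depth):
    # Residues of x modulo p, p**2, ..., p**depth; a point of the
    # inverse limit is exactly such a compatible tower of residues.
    return [x % p**r for r in range(1, depth + 1)]

# x_n = 1 + p + ... + p**(n-1) is Cauchy for H_r = p**r * Z, since
# x_n - x_m is divisible by p**r once m, n >= r.
xs = [sum(p**k for k in range(n)) for n in range(1, 8)]
for n, x in enumerate(xs, start=1):
    print(n, residue_tower(x, 5))   # each column stabilises as n grows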
If H is a cofinal sequence (that is, any normal subgroup of finite index contains some H_r), then this completion is canonical in the sense that it is isomorphic to the inverse limit of (G/H)_H, where H varies over all normal subgroups of finite index. For further details, see Ch. I.10 in Lang's "Algebra".
In a hyperreal continuum
A real sequence (u_n) has a natural hyperreal extension, defined for hypernatural values H of the index n in addition to the usual natural n. The sequence is Cauchy if and only if for every infinite H and K, the values u_H and u_K are infinitely close, or adequal, that is,
st(u_H − u_K) = 0,
where "st" is the standard part function.
Cauchy completion of categories
F. William Lawvere introduced a notion of Cauchy completion of a category. Applied to ℚ (the category whose objects are rational numbers, and there is a morphism from x to y if and only if x ≤ y), this Cauchy completion yields ℝ (again interpreted as a category using its natural ordering).
See also
References
Further reading
(for uses in constructive mathematics)
External links
Augustin-Louis Cauchy
Metric geometry
Topology
Abstract algebra
Sequences and series
Convergence (mathematics) | Cauchy sequence | [
"Physics",
"Mathematics"
] | 2,687 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Functions and mappings",
"Convergence (mathematics)",
"Abstract algebra",
"Mathematical objects",
"Topology",
"Space",
"Mathematical relations",
"Geometry",
"Spacetime",
"Algebra"
] |
6,088 | https://en.wikipedia.org/wiki/Common%20Era | Common Era (CE) and Before the Common Era (BCE) are year notations for the Gregorian calendar (and its predecessor, the Julian calendar), the world's most widely used calendar era. Common Era and Before the Common Era are alternatives to the original Anno Domini (AD) and Before Christ (BC) notations used for the same calendar era. The two notation systems are numerically equivalent: the current year bears the same number whether written with CE or AD, and "400 BCE" and "400 BC" are the same year.
The expression can be traced back to 1615, when it first appears in a book by Johannes Kepler as the Latin annus aerae nostrae vulgaris ("year of our common era"), and to 1635 in English as "Vulgar Era". The term "Common Era" can be found in English as early as 1708, and became more widely used in the mid-19th century by Jewish religious scholars. Since the late 20th century, BCE and CE have become popular in academic and scientific publications on the grounds that BCE and CE are religiously neutral terms. They have been promoted as more sensitive to non-Christians by not referring to Jesus, the central figure of Christianity, especially via the religious terms "Christ" and Domini ("Lord") used by the other abbreviations. Nevertheless, its epoch remains the same as that used for the Anno Domini era.
History
Origins
The idea of numbering years beginning from the date that he believed to be the date of birth of Jesus was conceived around the year 525 by the Christian monk Dionysius Exiguus. He did this to replace the then dominant Era of Martyrs system, because he did not wish to continue the memory of a tyrant who persecuted Christians. He numbered years from an initial reference date ("epoch"), an event he referred to as the Incarnation of Jesus. Dionysius labeled the column of the table in which he introduced the new era as "Anni Domini Nostri Jesu Christi" ("Of the years of our Lord Jesus Christ").
This way of numbering years became more widespread in Europe with its use by Bede in England in 731. Bede also introduced the practice of dating years before what he supposed was the year of birth of Jesus, without a year zero. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius.
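Because the era has no year zero, 1 BC is immediately followed by AD 1, which complicates arithmetic across the epoch; astronomical year numbering avoids this by mapping 1 BC to year 0, 2 BC to year −1, and so on. A minimal Python sketch of the conversion (the helper names are invented for this illustration):

def bc_to_astronomical(bc_year):
    # 1 BC -> 0, 2 BC -> -1, ...
    return 1 - bc_year

def astronomical_to_label(y):
    return f"AD {y}" if y >= 1 else f"{1 - y} BC"

assert bc_to_astronomical(1) == 0
assert astronomical_to_label(0) == "1 BC"
assert astronomical_to_label(-44) == "45 BC"
assert astronomical_to_label(731) == "AD 731"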
Vulgar Era
The term "Common Era" is traced back in English to its appearance as "Vulgar Era" to distinguish years of the Anno Domini era, which was in popular use, from dates of the regnal year (the year of the reign of a sovereign) typically used in national law.
(The word 'vulgar' originally meant 'of the ordinary people', with no derogatory associations.)
The first use of the Latin term may be that in a 1615 book by Johannes Kepler. Kepler uses it again in a 1616 table of ephemerides, and again in 1617. A 1635 English edition of that book has the title page in English that may be the earliest-found use of "Vulgar Era" in English. A 1701 book edited by John Le Clerc includes the phrase "Before Christ according to the Vulgar Æra, 6".
The Merriam Webster Dictionary gives 1716 as the date of first use of the term "vulgar era" (which it defines as Christian era).
The first published use of "Christian Era" may be the Latin equivalent on the title page of a 1584 theology book. In 1649, the Latin phrase appeared in the title of an English almanac. A 1652 ephemeris may be the first instance found so far of the English use of "Christian Era".
The English phrase "Common Era" appears at least as early as 1708,
and in a 1715 book on astronomy it is used interchangeably with "Christian Era" and "Vulgar Era". A 1759 history book uses common æra in a generic sense, to refer to "the common era of the Jews". The first use of the phrase "before the common era" may be that in a 1770 work that also uses common era and vulgar era as synonyms, in a translation of a book originally written in German. The 1797 edition of the Encyclopædia Britannica uses the terms vulgar era and common era synonymously.
In 1835, in his book Living Oracles, Alexander Campbell, wrote: "The vulgar Era, or Anno Domini; the fourth year of Jesus Christ, the first of which was but eight days",
and also refers to the common era as a synonym for vulgar era with "the fact that our Lord was born on the 4th year before the vulgar era, called Anno Domini, thus making (for example) the 42d year from his birth to correspond with the 38th of the common era". The Catholic Encyclopedia (1909) in at least one article reports all three terms (Christian, Vulgar, Common Era) being commonly understood by the early 20th century.
The phrase "common era", in lower case, also appeared in the 19th century in a "generic" sense, not necessarily to refer to the Christian Era, but to any system of dates in common use throughout a civilization. Thus, "the common era of the Jews", "the common era of the Mahometans", "common era of the world", "the common era of the foundation of Rome".
When it did refer to the Christian Era, it was sometimes qualified, e.g., "common era of the Incarnation", "common era of the Nativity", or "common era of the birth of Christ".
An adapted translation of Common Era into Latin as Era Vulgaris was adopted in the 20th century by some followers of Aleister Crowley, and thus the abbreviation "e.v." or "EV" may sometimes be seen as a replacement for AD.
History of the use of the CE/BCE abbreviation
Although Jews have their own Hebrew calendar, they often use the Gregorian calendar without the AD prefix. As early as 1825, the abbreviation VE (for Vulgar Era) was in use among Jews to denote years in the Western calendar. Common Era notation has also been in use for Hebrew lessons for more than a century. Jews have also used the term Current Era.
Contemporary usage
Some academics in the fields of theology, education, archaeology and history have adopted CE and BCE notation despite some disagreement. A study conducted in 2014 found that the BCE/CE notation is not growing at the expense of BC and AD notation in the scholarly literature, and that both notations are used in a relatively stable fashion.
Australia
In 2011, media reports suggested that the BC/AD notation in Australian school textbooks would be replaced by BCE/CE notation. The change drew opposition from some politicians and church leaders. Weeks after the story broke, the Australian Curriculum, Assessment and Reporting Authority denied the rumours and stated that the BC/AD notation would remain, with CE and BCE as an optional suggested learning activity.
Canada
In 2013, the Canadian Museum of Civilization (now the Canadian Museum of History) in Gatineau (opposite Ottawa), which had previously switched to BCE/CE, decided to change back to BC/AD in material intended for the public while retaining BCE/CE in academic content.
Nepal
The notation is in particularly common use in Nepal in order to disambiguate dates from the local calendar, Bikram or Vikram Sambat. Disambiguation is needed because the era of the local calendar is quite close to the Common Era.
United Kingdom
In 2002, an advisory panel for the religious education syllabus for England and Wales recommended introducing BCE/CE dates to schools, and by 2018 some local education authorities were using them.
In 2018, the National Trust said it would continue to use BC/AD as its house style. English Heritage explains its era policy thus: "It might seem strange to use a Christian calendar system when referring to British prehistory, but the BC/AD labels are widely used and understood." Some parts of the BBC use BCE/CE, but some presenters have said they will not. As of October 2019, the BBC News style guide has entries for AD and BC, but not for CE or BCE. The style guide for The Guardian says, under the entry for CE/BCE: "some people prefer CE (common era, current era, or Christian era) and BCE (before common era, etc.) to AD and BC, which, however, remain our style".
United States
In the United States, the use of the BCE/CE notation in textbooks was reported in 2005 to be growing. Some publications have transitioned to using it exclusively. For example, the 2007 World Almanac was the first edition to switch to BCE/CE, ending a period of 138 years in which the traditional BC/AD dating notation was used. BCE/CE is used by the College Board in its history tests, and by the Norton Anthology of English Literature. Others have taken a different approach. The US-based History Channel uses BCE/CE notation in articles on non-Christian religious topics such as Jerusalem and Judaism. The 2006 style guide for the Episcopal Diocese of Maryland Church News says that BCE and CE should be used.
In June 2006, in the United States, the Kentucky State School Board reversed its decision to use BCE and CE in the state's new Program of Studies, leaving education of students about these concepts a matter of local discretion.
Rationales
Support
The use of CE in Jewish scholarship was historically motivated by the desire to avoid the implicit "Our Lord" in the abbreviation AD. Although other aspects of dating systems are based in Christian origins, AD is a direct reference to Jesus as Lord. Proponents of the Common Era notation assert that the use of BCE/CE shows sensitivity to those who use the same year numbering system as the one that originated with and is currently used by Christians, but who are not themselves Christian. Former United Nations Secretary-General Kofi Annan has argued along similar lines, noting that people of all faiths have taken to using the calendar as a matter of convenience, which makes a shared, common way of reckoning time a practical necessity.
Adena K. Berkowitz, in her application to argue before the United States Supreme Court, opted to use BCE and CE because, "Given the multicultural society that we live in, the traditional Jewish designations – B.C.E. and C.E. – cast a wider net of inclusion." In the World History Encyclopedia, Joshua J. Mark wrote "Non-Christian scholars, especially, embraced [CE and BCE] because they could now communicate more easily with the Christian community. Jewish, Islamic, Hindu and Buddhist scholars could retain their [own] calendar but refer to events using the Gregorian Calendar as BCE and CE without compromising their own beliefs about the divinity of Jesus of Nazareth." In History Today, Michael Ostling wrote: "BC/AD Dating: In the year of whose Lord? The continuing use of AD and BC is not only factually wrong but also offensive to many who are not Christians."
Opposition
Critics note that there is no difference in the epoch of the two systems, which was chosen to be close to the date of birth of Jesus. Since the year numbers are the same, BCE and CE dates should be as offensive to other religions as BC and AD. Roman Catholic priest and writer on interfaith issues Raimon Panikkar argued that the BCE/CE usage is the less inclusive option, since it retains the Christian calendar's numbering and forces it on other nations. In 1993, the English-language expert Kenneth G. Wilson speculated about a slippery-slope scenario in his style guide: "if we do end by casting aside the AD/BC convention, almost certainly some will argue that we ought to cast aside as well the conventional numbering system [that is, the method of numbering years] itself, given its Christian basis."
Some Christians are offended by the removal of the reference to Jesus, including the Southern Baptist Convention.
Conventions in style guides
The abbreviation BCE, just as with BC, always follows the year number. Unlike AD, which still often precedes the year number, CE always follows the year number (if context requires that it be written at all). Thus, the current year is written with the same number in both notations (with CE appended, or AD prepended, if further clarity is needed), and the year that Socrates died is represented as 399 BCE (the same year that is represented by 399 BC in the BC/AD notation). The abbreviations are sometimes written with small capital letters, or with periods (e.g., "B.C.E." or "C.E."). The US-based Society of Biblical Literature style guide for academic texts on religion prefers BCE/CE to BC/AD.
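The placement rules can be summarised in a tiny formatter (an illustration added here; it encodes negative integers as years before the era, a toy convention that ignores the absence of a year zero):

def format_year(year, traditional=False):
    # Positive years: AD precedes the number, CE follows it.
    # Years before the era: both BC and BCE follow the number.
    if year > 0:
        return f"AD {year}" if traditional else f"{year} CE"
    return f"{-year} {'BC' if traditional else 'BCE'}"

print(format_year(-399))        # 399 BCE (the year Socrates died)
print(format_year(-399, True))  # 399 BC
print(format_year(1066, True))  # AD 1066
print(format_year(1066))        # 1066 CE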
Similar conventions in other languages
In Germany, Jews in Berlin seem to have already been using words translating to "(before the) common era" in the 18th century, while others like Moses Mendelssohn opposed this usage as it would hinder the integration of Jews into German society. The formulation seems to have persisted among German Jews in the 19th century in forms translating to "before the common chronology". In 1938 Nazi Germany, the use of this convention was also prescribed by the National Socialist Teachers League. However, it was soon discovered that many German Jews had been using the convention ever since the 18th century, and Time magazine found it ironic to see "Aryans following Jewish example nearly 200 years later".
In Spanish, common forms used for "BC" are a. C. and a. de C. (for antes de Cristo, "before Christ"), with variations in punctuation and sometimes the use of J.C. (Jesucristo) instead of C. The Real Academia Española also acknowledges the use of a. n. e. (antes de nuestra era) and d. n. e. (después de nuestra era). In scholarly writing, a. e. c. is the equivalent of the English "BCE", antes de la era común or "Before the Common Era".
In Welsh, OC can be expanded to equivalents of both AD (Oed Crist) and CE (Oes Gyffredin); for dates before the Common Era, CC (traditionally, Cyn Crist) is used exclusively, as Cyn yr Oes Gyffredin would abbreviate to a mild obscenity.
In Russian, since the October Revolution (1917), до н. э. (до нашей эры, lit. "before our era") and н. э. (нашей эры, lit. "of our era") are used almost universally. Within Christian churches, dating before/after the birth of Christ (до/от Рождества Христова, equivalent to BC/AD) remains in use.
In Polish, "p.n.e." (przed naszą erą, lit. "before our era") and "n.e." (naszej ery, lit. "of our era") are commonly used in historical and scientific literature. Przed Chrystusem ("before Christ") and po Chrystusie ("after Christ") see sporadic usage, mostly in religious publications.
In China, upon the foundation of the Republic of China, the Government in Nanking adopted the Republic of China calendar, with 1912 designated as year 1, but used the Western calendar for international purposes. The translated term was 西元 ("Western Era"), which is still used in Taiwan in formal documents. In 1949, the People's Republic of China adopted 公元 ("Common Era") for both internal and external affairs in mainland China. This notation was extended to Hong Kong in 1997 and Macau in 1999 (de facto extended in 1966) through Annex III of the Hong Kong Basic Law and Macau Basic Law, thus eliminating the ROC calendar in these areas. BCE is translated into Chinese as 公元前 ("Before the Common Era").
In Czech, "n. l." (našeho letopočtu, which translates as "of our year count") and "př. n. l." or "před n. l." (před naším letopočtem, meaning "before our year count") are used, always after the year number. The direct translation of AD (léta Páně, abbreviated as L. P.) or BC (před Kristem, abbreviated as př. Kr.) is seen as archaic.
In Croatian the common form used for BC and AD are pr. Kr. (prije Krista, "before Christ") and p. Kr. (poslije Krista, after Christ). The abbreviations pr. n. e. (prije nove ere, before new era) and n. e. (nove ere, (of the) new era) have also recently been introduced.
In Danish, "f.v.t." (før vor tidsregning, "before our time reckoning") and "e.v.t." (efter vor tidsregning, "after our time reckoning") are used as BCE/CE are in English. Also commonly used are "f.Kr." (før Kristus, "before Christ") and "e.Kr." (efter Kristus, "after Christ"), which are both placed after the year number, in contrast with BC/AD in English.
In Macedonian, the terms "п.н.е." (пред нашата ера "before our era") and "н.е." (наша ера "our era") are used in every aspect.
In Estonian, "e.m.a." (enne meie ajaarvamist, "before our time reckoning") and "m.a.j." (meie ajaarvamise järgi, "according to our time reckoning") are used as BCE and CE, respectively. Also in use are the terms "eKr" (enne Kristust, "before Christ") and "pKr" (pärast Kristust, "after Christ"). In all cases, the abbreviation is written after the year number.
In Finnish, "eaa." (ennen ajanlaskun alkua, "before time reckoning") and "jaa." (jälkeen ajanlaskun alun, "after the start of time reckoning") are used as BCE and CE, respectively. Also (decreasingly) in use are the terms "eKr." (ennen Kristusta, "before Christ") and "jKr." (jälkeen Kristuksen, "after Christ"). In all cases, the abbreviation is written after the year number.
See also
Astronomical year numbering
Before Present
Calendar
Calendar reform
Holocene Era
List of calendars
Explanatory notes
References
External links
1610s introductions
1615 beginnings
17th-century neologisms
Calendar eras
Chronology
Gregorian calendar
Linguistic controversies
Secularism and religions | Common Era | [
"Physics"
] | 3,651 | [
"Spacetime",
"Chronology",
"Physical quantities",
"Time"
] |
6,099 | https://en.wikipedia.org/wiki/Carboxylic%20acid | In organic chemistry, a carboxylic acid is an organic acid that contains a carboxyl group () attached to an R-group. The general formula of a carboxylic acid is often written as or , sometimes as with R referring to an organyl group (e.g., alkyl, alkenyl, aryl), or hydrogen, or other groups. Carboxylic acids occur widely. Important examples include the amino acids and fatty acids. Deprotonation of a carboxylic acid gives a carboxylate anion.
Examples and nomenclature
Carboxylic acids are commonly identified by their trivial names. They often have the suffix -ic acid. IUPAC-recommended names also exist; in this system, carboxylic acids have an -oic acid suffix. For example, butyric acid () is butanoic acid by IUPAC guidelines. For nomenclature of complex molecules containing a carboxylic acid, the carboxyl can be considered position one of the parent chain even if there are other substituents, such as 3-chloropropanoic acid. Alternately, it can be named as a "carboxy" or "carboxylic acid" substituent on another parent structure, such as 2-carboxyfuran.
The carboxylate anion ( or ) of a carboxylic acid is usually named with the suffix -ate, in keeping with the general pattern of -ic acid and -ate for a conjugate acid and its conjugate base, respectively. For example, the conjugate base of acetic acid is acetate.
Carbonic acid, which occurs in bicarbonate buffer systems in nature, is not generally classed as one of the carboxylic acids, even though it has a moiety that looks like a COOH group.
Physical properties
Solubility
Carboxylic acids are polar. Because they are both hydrogen-bond acceptors (the carbonyl ) and hydrogen-bond donors (the hydroxyl ), they also participate in hydrogen bonding. Together, the hydroxyl and carbonyl group form the functional group carboxyl. Carboxylic acids usually exist as dimers in nonpolar media due to their tendency to "self-associate". Smaller carboxylic acids (1 to 5 carbons) are soluble in water, whereas bigger carboxylic acids have limited solubility due to the increasing hydrophobic nature of the alkyl chain. These longer chain acids tend to be soluble in less-polar solvents such as ethers and alcohols. Aqueous sodium hydroxide and carboxylic acids, even hydrophobic ones, react to yield water-soluble sodium salts. For example, enanthic acid has a low solubility in water (0.2 g/L), but its sodium salt is very soluble in water.
Boiling points
Carboxylic acids tend to have higher boiling points than water, because of their greater surface areas and their tendency to form stabilized dimers through hydrogen bonds. For boiling to occur, either the dimer bonds must be broken or the entire dimer arrangement must be vaporized, increasing the enthalpy of vaporization requirements significantly.
Acidity
Carboxylic acids are Brønsted–Lowry acids because they are proton (H+) donors. They are the most common type of organic acid.
Carboxylic acids are typically weak acids, meaning that they only partially dissociate into cations and anions in neutral aqueous solution. For example, at room temperature, in a 1-molar solution of acetic acid, only about 0.4% of the acid molecules are dissociated (roughly 4 × 10⁻³ moles out of 1 mol). Electron-withdrawing substituents, such as the -CF3 group, give stronger acids (the pKa of acetic acid is 4.76 whereas trifluoroacetic acid, with a trifluoromethyl substituent, has a pKa of 0.23). Electron-donating substituents give weaker acids (the pKa of formic acid is 3.75 whereas acetic acid, with a methyl substituent, has a pKa of 4.76).
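These percentages follow directly from the acid dissociation constant; a minimal Python sketch (added for illustration, assuming ideal dilute-solution behaviour) solves the equilibrium x²/(C − x) = Ka for the dissociated fraction:

import math

def fraction_dissociated(pKa, C):
    # Solve x**2 / (C - x) = Ka for x = [H+] = [A-]; return x / C.
    Ka = 10.0 ** -pKa
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0  # positive root
    return x / C

print(fraction_dissociated(4.76, 1.0))  # ~0.0042: about 0.4% of acetic acid
print(fraction_dissociated(0.23, 1.0))  # ~0.53: trifluoroacetic acid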
Deprotonation of carboxylic acids gives carboxylate anions; these are resonance stabilized, because the negative charge is delocalized over the two oxygen atoms, increasing the stability of the anion. Each of the carbon–oxygen bonds in the carboxylate anion has a partial double-bond character. The carbonyl carbon's partial positive charge is also weakened by the -1/2 negative charges on the 2 oxygen atoms.
Odour
Carboxylic acids often have strong sour odours. Esters of carboxylic acids tend to have fruity, pleasant odours, and many are used in perfume.
Characterization
Carboxylic acids are readily identified as such by infrared spectroscopy. They exhibit a sharp band associated with vibration of the C=O carbonyl bond (νC=O) between 1680 and 1725 cm−1. A characteristic νO–H band appears as a broad peak in the 2500 to 3000 cm−1 region. By 1H NMR spectrometry, the hydroxyl hydrogen appears in the 10–13 ppm region, although it is often either broadened or not observed owing to exchange with traces of water.
Occurrence and applications
Many carboxylic acids are produced industrially on a large scale. They are also frequently found in nature. Esters of fatty acids are the main components of lipids and polyamides of aminocarboxylic acids are the main components of proteins.
Carboxylic acids are used in the production of polymers, pharmaceuticals, solvents, and food additives. Industrially important carboxylic acids include acetic acid (component of vinegar, precursor to solvents and coatings), acrylic and methacrylic acids (precursors to polymers, adhesives), adipic acid (polymers), citric acid (a flavor and preservative in food and beverages), ethylenediaminetetraacetic acid (chelating agent), fatty acids (coatings), maleic acid (polymers), propionic acid (food preservative), terephthalic acid (polymers). Important carboxylate salts are soaps.
Synthesis
Industrial routes
In general, industrial routes to carboxylic acids differ from those used on a smaller scale because they require specialized equipment.
Carbonylation of alcohols as illustrated by the Cativa process for the production of acetic acid. Formic acid is prepared by a different carbonylation pathway, also starting from methanol.
Oxidation of aldehydes with air using cobalt and manganese catalysts. The required aldehydes are readily obtained from alkenes by hydroformylation.
Oxidation of hydrocarbons using air. For simple alkanes, this method is inexpensive but not selective enough to be useful. Allylic and benzylic compounds undergo more selective oxidations. Alkyl groups on a benzene ring are oxidized to the carboxylic acid, regardless of its chain length. Benzoic acid from toluene, terephthalic acid from para-xylene, and phthalic acid from ortho-xylene are illustrative large-scale conversions. Acrylic acid is generated from propene.
Oxidation of ethene using silicotungstic acid catalyst.
Base-catalyzed dehydrogenation of alcohols.
Carbonylation coupled to the addition of water. This method is effective and versatile for alkenes that generate secondary and tertiary carbocations, e.g. isobutylene to pivalic acid. In the Koch reaction, the addition of water and carbon monoxide to alkenes or alkynes is catalyzed by strong acids. Hydrocarboxylations involve the simultaneous addition of water and CO. Such reactions are sometimes called "Reppe chemistry."
Hydrolysis of triglycerides obtained from plant or animal oils. These methods of synthesizing some long-chain carboxylic acids are related to soap making.
Fermentation of ethanol. This method is used in the production of vinegar.
The Kolbe–Schmitt reaction provides a route to salicylic acid, precursor to aspirin.
Laboratory methods
Preparative methods for small scale reactions for research or for production of fine chemicals often employ expensive consumable reagents.
Oxidation of primary alcohols or aldehydes with strong oxidants such as potassium dichromate, Jones reagent, potassium permanganate, or sodium chlorite. The method is more suitable for laboratory conditions than the industrial use of air, which is "greener" because it yields less inorganic side products such as chromium or manganese oxides.
Oxidative cleavage of olefins by ozonolysis, potassium permanganate, or potassium dichromate.
Hydrolysis of nitriles, esters, or amides, usually with acid- or base-catalysis.
Carbonation of a Grignard reagent and organolithium reagents: RLi + CO₂ → RCO₂Li, followed by acidic workup, RCO₂Li + HCl → RCO₂H + LiCl
Halogenation followed by hydrolysis of methyl ketones in the haloform reaction
Base-catalyzed cleavage of non-enolizable ketones, especially aryl ketones: RC(O)Ar + H₂O → RCO₂H + ArH
Less-common reactions
Many reactions produce carboxylic acids but are used only in specific cases or are mainly of academic interest.
Disproportionation of an aldehyde in the Cannizzaro reaction
Rearrangement of diketones in the benzilic acid rearrangement
Involving the generation of benzoic acids are the von Richter reaction from nitrobenzenes and the Kolbe–Schmitt reaction from phenols.
Reactions
Acid-base reactions
Carboxylic acids react with bases to form carboxylate salts, in which the hydrogen of the hydroxyl (–OH) group is replaced with a metal cation. For example, acetic acid found in vinegar reacts with sodium bicarbonate (baking soda) to form sodium acetate, carbon dioxide, and water:
CH₃COOH + NaHCO₃ → CH₃COONa + CO₂ + H₂O
Conversion to esters, amides, anhydrides
Widely practiced reactions convert carboxylic acids into esters, amides, carboxylate salts, acid chlorides, and alcohols.
Their conversion to esters is widely used, e.g. in the production of polyesters. Likewise, carboxylic acids are converted into amides, but this conversion typically does not occur by direct reaction of the carboxylic acid and the amine. Instead esters are typical precursors to amides. The conversion of amino acids into peptides is a significant biochemical process that requires ATP.
Converting a carboxylic acid to an amide is possible, but not straightforward. Instead of acting as a nucleophile, an amine will react as a base in the presence of a carboxylic acid to give the ammonium carboxylate salt. Heating the salt to above 100 °C will drive off water and lead to the formation of the amide. This method of synthesizing amides is industrially important, and has laboratory applications as well. In the presence of a strong acid catalyst, carboxylic acids can condense to form acid anhydrides. The condensation produces water, however, which can hydrolyze the anhydride back to the starting carboxylic acids. Thus, the formation of the anhydride via condensation is an equilibrium process.
Under acid-catalyzed conditions, carboxylic acids will react with alcohols to form esters via the Fischer esterification reaction, which is also an equilibrium process. Alternatively, diazomethane can be used to convert an acid to an ester. While esterification reactions with diazomethane often give quantitative yields, diazomethane is only useful for forming methyl esters.
Reduction
Like esters, most carboxylic acids can be reduced to alcohols by hydrogenation, or using hydride transferring agents such as lithium aluminium hydride. Strong alkyl transferring agents, such as organolithium compounds but not Grignard reagents, will reduce carboxylic acids to ketones along with transfer of the alkyl group.
The Vilsmeier reagent (N,N-dimethyl(chloromethylene)ammonium chloride) is a highly chemoselective agent for carboxylic acid reduction. It selectively activates the carboxylic acid to give the carboxymethyleneammonium salt, which can be reduced by a mild reductant like lithium tris(t-butoxy)aluminum hydride to afford an aldehyde in a one pot procedure. This procedure is known to tolerate reactive carbonyl functionalities such as ketone as well as moderately reactive ester, olefin, nitrile, and halide moieties.
Conversion to acyl halides
The hydroxyl group on carboxylic acids may be replaced with a chlorine atom using thionyl chloride to give acyl chlorides. In nature, carboxylic acids are converted to thioesters. Thionyl chloride can be used to convert carboxylic acids to their corresponding acyl chlorides. First, carboxylic acid 1 attacks thionyl chloride, and chloride ion leaves. The resulting oxonium ion 2 is activated towards nucleophilic attack and has a good leaving group, setting it apart from a normal carboxylic acid. In the next step, 2 is attacked by chloride ion to give tetrahedral intermediate 3, a chlorosulfite. The tetrahedral intermediate collapses with the loss of sulfur dioxide and chloride ion, giving protonated acyl chloride 4. Chloride ion can remove the proton on the carbonyl group, giving the acyl chloride 5 with a loss of HCl.
Phosphorus(III) chloride (PCl3) and phosphorus(V) chloride (PCl5) will also convert carboxylic acids to acid chlorides, by a similar mechanism. One equivalent of PCl3 can react with three equivalents of acid, producing one equivalent of H3PO3, or phosphorous acid, in addition to the desired acid chloride. PCl5 reacts with carboxylic acids in a 1:1 ratio, and produces phosphorus(V) oxychloride (POCl3) and hydrogen chloride (HCl) as byproducts.
Reactions with carbanion equivalents
Carboxylic acids react with Grignard reagents and organolithiums to form ketones. The first equivalent of nucleophile acts as a base and deprotonates the acid. A second equivalent will attack the carbonyl group to create a geminal alkoxide dianion, which is protonated upon workup to give the hydrate of a ketone. Because most ketone hydrates are unstable relative to their corresponding ketones, the equilibrium between the two is shifted heavily in favor of the ketone. For example, the equilibrium constant for the formation of acetone hydrate from acetone is only 0.002. The carboxyl group is among the most acidic functional groups found in common organic compounds.
Specialized reactions
As with all carbonyl compounds, the protons on the α-carbon are labile due to keto–enol tautomerization. Thus, the α-carbon is easily halogenated in the Hell–Volhard–Zelinsky halogenation.
The Schmidt reaction converts carboxylic acids to amines.
Carboxylic acids are decarboxylated in the Hunsdiecker reaction.
The Dakin–West reaction converts an amino acid to the corresponding amino ketone.
In the Barbier–Wieland degradation, a carboxylic acid on an aliphatic chain having a simple methylene bridge at the alpha position can have the chain shortened by one carbon. The inverse procedure is the Arndt–Eistert synthesis, where an acid is converted into acyl halide, which is then reacted with diazomethane to give one additional methylene in the aliphatic chain.
Many acids undergo oxidative decarboxylation. Enzymes that catalyze these reactions are known as carboxylases (EC 6.4.1) and decarboxylases (EC 4.1.1).
Carboxylic acids are reduced to aldehydes via the ester and DIBAL, via the acid chloride in the Rosenmund reduction and via the thioester in the Fukuyama reduction.
In ketonic decarboxylation carboxylic acids are converted to ketones.
Organolithium reagents (>2 equiv) react with carboxylic acids to give a dilithium 1,1-diolate, a stable tetrahedral intermediate which decomposes to give a ketone upon acidic workup.
The Kolbe electrolysis is an electrolytic, decarboxylative dimerization reaction. It gets rid of the carboxyl groups of two acid molecules, and joins the remaining fragments together.
Carboxyl radical
The carboxyl radical, •COOH, only exists briefly. The acid dissociation constant of •COOH has been measured using electron paramagnetic resonance spectroscopy. The radical tends to dimerise to form oxalic acid.
See also
Acid anhydride
Acid chloride
Amide
Amino acid
Ester
List of carboxylic acids
Dicarboxylic acid
Pseudoacid
Thiocarboxy
Carbon dioxide (CO2)
References
External links
Carboxylic acids pH and titration – freeware for calculations, data analysis, simulation, and distribution diagram generation
PHC.
Functional groups | Carboxylic acid | [
"Chemistry"
] | 3,778 | [
"Carboxylic acids",
"Functional groups"
] |
6,102 | https://en.wikipedia.org/wiki/Cyan | Cyan is the color between blue and green on the visible spectrum of light. It is evoked by light with a predominant wavelength between 500 and 520 nm, between the wavelengths of green and blue.
In the subtractive color system, or CMYK color model, which can be overlaid to produce all colors in paint and color printing, cyan is one of the primary colors, along with magenta and yellow. In the additive color system, or RGB color model, used to create all the colors on a computer or television display, cyan is made by mixing equal amounts of green and blue light. Cyan is the complement of red; it can be made by the removal of red from white. Mixing red light and cyan light at the right intensity will make white light. It is commonly seen on a bright, sunny day in the sky.
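In 8-bit RGB terms these relationships are simple arithmetic; a minimal sketch (an added illustration of the idealized model, ignoring real-ink behaviour):

def rgb_to_cmy(r, g, b):
    # Each subtractive (CMY) channel is the complement of an additive one.
    return (255 - r, 255 - g, 255 - b)

white = (255, 255, 255)
red = (255, 0, 0)
cyan = tuple(w - c for w, c in zip(white, red))  # remove red from white
print(cyan)               # (0, 255, 255): full green plus full blue
print(rgb_to_cmy(*cyan))  # (255, 0, 0): cyan is a pure primary in CMY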
Shades and variations
Different shades of cyan can vary in terms of hue, chroma (also known as saturation, intensity, or colorfulness), or lightness (or value, tone, or brightness), or any combination of these characteristics. Differences in value can also be referred to as tints and shades, with a tint being a cyan mixed with white, and a shade being mixed with black.
Color nomenclature is subjective. Many shades of cyan with a bluish hue are called blue. Similarly, those with a greenish hue are referred to as green. A cyan with a dark shade is commonly known as teal. A teal blue shade leans toward the blue end of the spectrum. Variations of teal with a greener tint are commonly referred to as teal green.
Turquoise, reminiscent of the stone with the same name, is a shade in the green spectrum of cyan hues. Celeste is a lightly tinted cyan that represents the color of a clear sky. Other colors in the cyan color range are electric blue, aquamarine, and others described as blue-green.
History
Cyan boasts a rich and diverse history, holding cultural significance for millennia. In ancient civilizations, turquoise, valued for its aesthetic appeal, served as a highly regarded precious gem. Turquoise comes in a variety of shades from green to blue, but cyan hues are particularly prevalent. Approximately 3,700 years ago, an intricately crafted dragon-shaped treasure made from over 2,000 pieces of turquoise and jade was created. This artifact is widely recognized as the oldest Chinese dragon totem by many Chinese scholars.
Turquoise jewelry also held significant importance among the Aztecs, who often featured this precious gemstone in vibrant frescoes for both symbolic and decorative purposes. The Aztecs revered turquoise, associating its color with the heavens and sacredness. Additionally, ancient Egyptians interpreted cyan hues as representing faith and truth, while Tibetans viewed them as a symbol of infinity.
After earlier uses in various contexts, cyan hues found increased use in diverse cultures due to their appealing aesthetic qualities in religious structures and art pieces. For example, the prominent dome of the Goharshad Mosque in Iran, built in 1418, showcases this trend. Additionally, Jacopo da Pontormo's use of a teal shade for Mary's robe in the 1528 painting Carmignano Visitation demonstrates the allure for these hues. During the 16th century, speakers of the English language began using the term turquoise to describe the cyan color of objects that resembled the color of the stone.
In the 1870s, the French sculptor Frédéric Bartholdi began the construction of what would later become the Statue of Liberty. Over time, exposure to the elements caused the copper structure to develop its distinctive patina, now recognized as iconic cyan. Following this, there was a significant advancement in the use of cyan during the late 19th and early 20th centuries.
Impressionist artists, such as Claude Monet in his renowned Water Lilies, effectively incorporated cyan hues into their works. Deviating from traditional interpretations of local color under neutral lighting conditions, the focus of artists was on accurately depicting perceived color and the influence of light on altering object hues. Specifically, daylight plays a significant role in shifting the perceived color of objects toward cyan hues. In 1917, the color term teal was introduced to describe deeper shades of cyan.
In the late 19th century, while traditional nomenclature of red, yellow, and blue persisted, the printing industry initiated a shift towards utilizing magenta and cyan inks for red and blue hues, respectively. This transition aimed to establish a more versatile color gamut achievable with only three primary colors. In 1949, a document in the printing industry stated: “The four-color set comprises Yellow, Red (magenta), Blue (cyan), Black”. This practice of labeling magenta, yellow, and cyan as red, yellow, and blue persisted until 1961. As the hues evolved, the printing industry maintained the use of the traditional terms red, yellow, and blue. Consequently, pinpointing the exact date of origin for CMYK, in which cyan serves as a primary color, proves challenging.
In August 1991, the HP Deskwriter 500C became the first Deskwriter to offer color printing as an option. It used interchangeable black and color (cyan, magenta, and yellow) inkjet print cartridges. With the inclusion of cyan ink in printers, the term "cyan" has become widely recognized in both home and office settings. According to TUP/Technology User Profile 2020, approximately 70% of online American adults regularly use a home printer.
Etymology and terminology
Its name is derived from the Ancient Greek word kyanos (κύανος), meaning "dark blue enamel, Lapis lazuli". It was formerly known as "cyan blue" or cyan-blue, and its first recorded use as a color name in English was in 1879. Further origins of the color name can be traced back to a dye produced from the cornflower (Centaurea cyanus).
In most languages, 'cyan' is not a basic color term and it phenomenologically appears as a greenish vibrant hue of blue to most English speakers. Other English terms for this "borderline" hue region include blue green, aqua, turquoise, teal, and grue.
On the web and in printing
Web colors cyan and aqua
The web color cyan shown at right is a secondary color in the RGB color model, which uses combinations of red, green and blue light to create all the colors on computer and television displays. In X11 colors, this color is called both cyan and aqua. In the HTML color list, this same color is called aqua.
The web colors are more vivid than the cyan used in the CMYK color system, and the web colors cannot be accurately reproduced on a printed page. To reproduce the web color cyan in inks, it is necessary to add some white ink to the printer's cyan below, so when it is reproduced in printing, it is not a primary subtractive color. It is called aqua (a name in use since 1598) because it is a color commonly associated with water, such as the appearance of the water at a tropical beach.
Process cyan
Cyan is also one of the common inks used in four-color printing, along with magenta, yellow, and black; this set of colors is referred to as CMYK. In printing, the cyan ink is sometimes known as printer's cyan, process cyan, or process blue.
While both the additive secondary and the subtractive primary are called cyan, they can be substantially different from one another. Cyan printing ink is typically more saturated than the RGB secondary cyan, depending on what RGB color space and ink are considered. That is, process cyan is usually outside the RGB gamut, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure cyan ink. This is because real-world subtractive (unlike additive) color mixing does not consistently produce the same result when mixing apparently identical colors, since the specific frequencies filtered out to produce that color affect how it interacts with other colors. Phthalocyanine blue is one such commonly used pigment. A typical formulation of process cyan is shown in the color box on the right.
In science and nature
Color of water
Pure water is nearly colorless. However, it does absorb slightly more red light than blue, giving significant volumes of water a bluish tint; increased scattering of blue light due to fine particles in the water shifts the blue color toward green, for a typically cyan net color.
Cyan and cyanide
Cyanide derives its name from Prussian blue, a blue pigment containing the cyanide ion.
Bacteria
Cyanobacteria (sometimes called blue-green algae) are an important link in the food chain.
Astronomy
The planet Uranus is colored cyan because of the abundance of methane in its atmosphere. Methane absorbs red light and reflects the blue-green light which allows observers to see it as cyan.
Energy
Natural gas (methane), used by many for home cooking on gas stoves, has a cyan colored flame when burned with a mixture of air.
Photography and film
Cyanotype, or blueprint, a monochrome photographic printing process that predates the use of the word cyan as a color, yields a deep cyan-blue colored print based on the Prussian blue pigment.
In Cinecolor, a bi-pack color process, the photographer would load a standard camera with two films: an orthochromatic strip, dyed red, and a panchromatic strip behind it. Colored light would expose the cyan record on the ortho stock, which also acted as a filter, exposing only red light to the panchromatic film stock.
Medicine
Cyanosis is an abnormal blueness of the skin, usually a sign of poor oxygen intake; patients are typically described as being "cyanotic".
Cyanopsia is a color vision defect where vision is tinted blue. This can be a drug-induced side effect or experienced after cataract removal.
Gallery
See also
Blue–green distinction in language
Shades of cyan
Lists of colors
References
Primary colors
Secondary colors
Optical spectrum
Shades of blue
Shades of green
Rainbow colors
Tertiary colors | Cyan | [
"Physics"
] | 2,146 | [
"Optical spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
6,111 | https://en.wikipedia.org/wiki/Chemical%20vapor%20deposition | Chemical vapor deposition (CVD) is a vacuum deposition method used to produce high-quality, and high-performance, solid materials. The process is often used in the semiconductor industry to produce thin films.
In typical CVD, the wafer (substrate) is exposed to one or more volatile precursors, which react and/or decompose on the substrate surface to produce the desired deposit. Frequently, volatile by-products are also produced, which are removed by gas flow through the reaction chamber.
Microfabrication processes widely use CVD to deposit materials in various forms, including: monocrystalline, polycrystalline, amorphous, and epitaxial. These materials include: silicon (dioxide, carbide, nitride, oxynitride), carbon (fiber, nanofibers, nanotubes, diamond and graphene), fluorocarbons, filaments, tungsten, titanium nitride and various high-κ dielectrics.
The term chemical vapour deposition was coined in 1960 by John M. Blocher, Jr., who intended to differentiate chemical from physical vapour deposition (PVD).
Types
CVD is practiced in a variety of formats. These processes generally differ in the means by which chemical reactions are initiated.
Classified by operating conditions:
Atmospheric pressure CVD (APCVD) – CVD at atmospheric pressure.
Low-pressure CVD (LPCVD) – CVD at sub-atmospheric pressures. Many journal articles and commercial tools use the term reduced pressure CVD (RPCVD) especially for single wafer tools in place of LPCVD which dominates for multi-wafer furnace tube tools. Reduced pressures tend to reduce unwanted gas-phase reactions and improve film uniformity across the wafer.
Ultrahigh vacuum CVD (UHVCVD) – CVD at very low pressure, typically below 10⁻⁶ Pa (≈ 10⁻⁸ Torr). Note that in other fields, a lower division between high and ultra-high vacuum is common, often 10⁻⁷ Pa.
Sub-atmospheric CVD (SACVD) – CVD at sub-atmospheric pressures. Uses tetraethyl orthosilicate (TEOS) and ozone to fill high aspect ratio Si structures with silicon dioxide (SiO2).
Most modern CVD is either LPCVD or UHVCVD.
Classified by physical characteristics of vapor:
Aerosol assisted CVD (AACVD) – CVD in which the precursors are transported to the substrate by means of a liquid/gas aerosol, which can be generated ultrasonically. This technique is suitable for use with non-volatile precursors.
Direct liquid injection CVD (DLICVD) – CVD in which the precursors are in liquid form (liquid or solid dissolved in a convenient solvent). Liquid solutions are injected in a vaporization chamber towards injectors (typically car injectors). The precursor vapors are then transported to the substrate as in classical CVD. This technique is suitable for use on liquid or solid precursors. High growth rates can be reached using this technique.
Classified by type of substrate heating:
Hot wall CVD – CVD in which the chamber is heated by an external power source and the substrate is heated by radiation from the heated chamber walls.
Cold wall CVD – CVD in which only the substrate is directly heated either by induction or by passing current through the substrate itself or a heater in contact with the substrate. The chamber walls are at room temperature.
Plasma methods (see also Plasma processing):
Microwave plasma-assisted CVD (MPCVD)
Plasma-enhanced CVD (PECVD) – CVD that utilizes plasma to enhance chemical reaction rates of the precursors. PECVD processing allows deposition at lower temperatures, which is often critical in the manufacture of semiconductors. The lower temperatures also allow for the deposition of organic coatings, such as plasma polymers, that have been used for nanoparticle surface functionalization.
Remote plasma-enhanced CVD (RPECVD) – Similar to PECVD except that the wafer substrate is not directly in the plasma discharge region. Removing the wafer from the plasma region allows processing temperatures down to room temperature.
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) - CVD employing a high density, low energy plasma to obtain epitaxial deposition of semiconductor materials at high rates and low temperatures.
Atomic-layer CVD (ALCVD) – Deposits successive layers of different substances to produce layered, crystalline films. See Atomic layer epitaxy.
Combustion chemical vapor deposition (CCVD) – also known as flame pyrolysis, an open-atmosphere, flame-based technique for depositing high-quality thin films and nanomaterials.
Hot filament CVD (HFCVD) – also known as catalytic CVD (Cat-CVD) or, more commonly, initiated CVD, this process uses a hot filament to chemically decompose the source gases. The filament temperature and substrate temperature thus are independently controlled, allowing colder temperatures for better adsorption rates at the substrate and higher temperatures necessary for decomposition of precursors to free radicals at the filament.
Hybrid physical-chemical vapor deposition (HPCVD) – This process involves both chemical decomposition of precursor gas and vaporization of a solid source.
Metalorganic chemical vapor deposition (MOCVD) – This CVD process is based on metalorganic precursors.
Rapid thermal CVD (RTCVD) – This CVD process uses heating lamps or other methods to rapidly heat the wafer substrate. Heating only the substrate rather than the gas or chamber walls helps reduce unwanted gas-phase reactions that can lead to particle formation.
Vapor-phase epitaxy (VPE)
Photo-initiated CVD (PICVD) – This process uses UV light to stimulate chemical reactions. It is similar to plasma processing, given that plasmas are strong emitters of UV radiation. Under certain conditions, PICVD can be operated at or near atmospheric pressure.
Laser chemical vapor deposition (LCVD) – This CVD process uses lasers to heat spots or lines on a substrate in semiconductor applications. In MEMS and in fiber production the lasers are used to rapidly break down the precursor gas—process temperature can exceed 2000 °C—to build up a solid structure in much the same way as laser-sintering-based 3-D printers build up solids from powders.
Uses
CVD is commonly used to deposit conformal films and augment substrate surfaces in ways that more traditional surface modification techniques are not capable of. CVD is extremely useful in atomic layer deposition for depositing extremely thin layers of material. A variety of applications for such films exist. Gallium arsenide is used in some integrated circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in photovoltaic devices. Certain carbides and nitrides confer wear-resistance. Polymerization by CVD, perhaps the most versatile of all applications, allows for super-thin coatings which possess some very desirable qualities, such as lubricity, hydrophobicity and weather-resistance, to name a few. The CVD of metal-organic frameworks, a class of crystalline nanoporous materials, has recently been demonstrated. Recently scaled up as an integrated cleanroom process depositing large-area substrates, the applications for these films are anticipated in gas sensing and low-κ dielectrics. CVD techniques are advantageous for membrane coatings as well, such as those in desalination or water treatment, as these coatings can be sufficiently uniform (conformal) and thin that they do not clog membrane pores.
Commercially important materials prepared by CVD
Polysilicon
Polycrystalline silicon is deposited from trichlorosilane (SiHCl3) or silane (SiH4), using the following reactions:
SiHCl3 → Si + Cl2 + HCl
SiH4 → Si + 2 H2
This reaction is usually performed in LPCVD systems, with either pure silane feedstock or silane diluted with 70–80% nitrogen. Temperatures between 600 and 650 °C and pressures between 25 and 150 Pa yield a growth rate between 10 and 20 nm per minute. An alternative process uses a hydrogen-based feedstock. The hydrogen reduces the growth rate, so the temperature is raised to 850 or even 1050 °C to compensate. Polysilicon may be grown directly with doping, if gases such as phosphine, arsine or diborane are added to the CVD chamber. Diborane increases the growth rate, but arsine and phosphine decrease it.
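As a back-of-the-envelope illustration of the growth rates quoted above, the following Python sketch brackets the deposition time for a polysilicon layer; the 300 nm target thickness is an assumed, illustrative value:

    def deposition_time_minutes(target_nm, rate_nm_per_min):
        """Time to grow a film of the given thickness at a constant growth rate."""
        return target_nm / rate_nm_per_min

    # The quoted 10-20 nm/min range brackets the time for an assumed 300 nm layer:
    for rate in (10.0, 20.0):
        print(f"{rate:.0f} nm/min -> {deposition_time_minutes(300, rate):.0f} min")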
Silicon dioxide
Silicon dioxide (usually called simply "oxide" in the semiconductor industry) may be deposited by several different processes. Common source gases include silane and oxygen, dichlorosilane (SiCl2H2) and nitrous oxide (N2O), or tetraethylorthosilicate (TEOS; Si(OC2H5)4). The reactions are as follows:
SiH4 + O2 → SiO2 + 2 H2
SiCl2H2 + 2 N2O → SiO2 + 2 N2 + 2 HCl
Si(OC2H5)4 → SiO2 + byproducts
The choice of source gas depends on the thermal stability of the substrate; for instance, aluminium is sensitive to high temperature. Silane deposits between 300 and 500 °C, dichlorosilane at around 900 °C, and TEOS between 650 and 750 °C, resulting in a layer of low-temperature oxide (LTO). However, silane produces a lower-quality oxide than the other methods (lower dielectric strength, for instance), and it deposits nonconformally. Any of these reactions may be used in LPCVD, but the silane reaction is also done in APCVD. CVD oxide invariably has lower quality than thermal oxide, but thermal oxidation can only be used in the earliest stages of IC manufacturing.
Oxide may also be grown with impurities (alloying or "doping"). This may have two purposes. During further process steps that occur at high temperature, the impurities may diffuse from the oxide into adjacent layers (most notably silicon) and dope them. Oxides containing 5–15% impurities by mass are often used for this purpose. In addition, silicon dioxide alloyed with phosphorus pentoxide ("P-glass") can be used to smooth out uneven surfaces. P-glass softens and reflows at temperatures above 1000 °C. This process requires a phosphorus concentration of at least 6%, but concentrations above 8% can corrode aluminium. Phosphorus is deposited from phosphine gas and oxygen:
4 PH3 + 5 O2 → 2 P2O5 + 6 H2
Glasses containing both boron and phosphorus (borophosphosilicate glass, BPSG) undergo viscous flow at lower temperatures; around 850 °C is achievable with glasses containing around 5 weight % of both constituents, but stability in air can be difficult to achieve. Phosphorus oxide in high concentrations interacts with ambient moisture to produce phosphoric acid. Crystals of BPO4 can also precipitate from the flowing glass on cooling; these crystals are not readily etched in the standard reactive plasmas used to pattern oxides, and will result in circuit defects in integrated circuit manufacturing.
Besides these intentional impurities, CVD oxide may contain byproducts of the deposition. TEOS produces a relatively pure oxide, whereas silane introduces hydrogen impurities, and dichlorosilane introduces chlorine.
Lower temperature deposition of silicon dioxide and doped glasses from TEOS using ozone rather than oxygen has also been explored (350 to 500 °C). Ozone glasses have excellent conformality but tend to be hygroscopic – that is, they absorb water from the air due to the incorporation of silanol (Si-OH) in the glass. Infrared spectroscopy and mechanical strain as a function of temperature are valuable tools for diagnosing such problems.
Silicon nitride
Silicon nitride is often used as an insulator and chemical barrier in manufacturing ICs. The following two reactions deposit silicon nitride from the gas phase:
3 SiH4 + 4 NH3 → Si3N4 + 12 H2
3 SiCl2H2 + 4 NH3 → Si3N4 + 6 HCl + 6 H2
Silicon nitride deposited by LPCVD contains up to 8% hydrogen. It also experiences strong tensile stress, which may crack films thicker than 200 nm. However, it has higher resistivity and dielectric strength than most insulators commonly available in microfabrication (10¹⁶ Ω·cm and 10 MV/cm, respectively).
Another two reactions may be used in plasma to deposit SiNH:
2 SiH4 + N2 → 2 SiNH + 3 H2
SiH4 + NH3 → SiNH + 3 H2
These films have much less tensile stress, but worse electrical properties (resistivity 10⁶ to 10¹⁵ Ω·cm, and dielectric strength 1 to 5 MV/cm).
Metals
Tungsten CVD, used for forming conductive contacts, vias, and plugs on a semiconductor device, is achieved from tungsten hexafluoride (WF6), which may be deposited in two ways:
WF6 → W + 3 F2
WF6 + 3 H2 → W + 6 HF
Other metals, notably aluminium and copper, can be deposited by CVD. Commercially cost-effective CVD processes for copper have long been lacking, although volatile sources such as Cu(hfac)2 exist. Copper is typically deposited by electroplating. Aluminium can be deposited from triisobutylaluminium (TIBAL) and related organoaluminium compounds.
CVD for molybdenum, tantalum, titanium, and nickel is widely used. These metals can form useful silicides when deposited onto silicon. Mo, Ta and Ti are deposited by LPCVD, from their pentachlorides. Nickel, molybdenum, and tungsten can be deposited at low temperatures from their carbonyl precursors. In general, for an arbitrary metal M, the chloride deposition reaction is as follows:
2 MCl5 + 5 H2 → 2 M + 10 HCl
whereas the carbonyl decomposition reaction can happen spontaneously under thermal treatment or acoustic cavitation and is as follows:
M(CO)n → M + n CO
The decomposition of metal carbonyls is often violently precipitated by moisture or air, where oxygen reacts with the metal precursor to form metal or metal oxide along with carbon dioxide.
Niobium(V) oxide layers can be produced by the thermal decomposition of niobium(V) ethoxide with the loss of diethyl ether according to the equation:
2 Nb(OC2H5)5 → Nb2O5 + 5 C2H5OC2H5
Graphene
Many variations of CVD can be utilized to synthesize graphene. Although many advancements have been made, the processes listed below are not commercially viable yet.
Carbon source
The most popular carbon source that is used to produce graphene is methane gas. One of the less popular choices is petroleum asphalt, notable for being inexpensive but more difficult to work with.
Although methane is the most popular carbon source, hydrogen is required during the preparation process to promote carbon deposition on the substrate. If the flow ratio of methane to hydrogen is not appropriate, it will cause undesirable results. During the growth of graphene, methane provides the carbon source, while hydrogen provides H atoms that corrode amorphous carbon and improve the quality of the graphene. However, excessive H atoms can also corrode the graphene itself, destroying the integrity of the crystal lattice and degrading its quality. Optimizing the flow rates of methane and hydrogen during growth therefore improves the quality of the graphene.
Use of catalyst
Catalysts can be used to change the physical process of graphene production. Notable examples include iron nanoparticles, nickel foam, and gallium vapor. These catalysts can either be used in situ during graphene buildup, or situated at some distance away from the deposition area. Some catalysts require another step to remove them from the sample material.
The direct growth of high-quality, large single-crystalline domains of graphene on a dielectric substrate is of vital importance for applications in electronics and optoelectronics. Combining the advantages of both catalytic CVD and the ultra-flat dielectric substrate, gaseous catalyst-assisted CVD paves the way for synthesizing high-quality graphene for device applications while avoiding the transfer process.
Physical conditions
Physical conditions such as surrounding pressure, temperature, carrier gas, and chamber material play a big role in production of graphene.
Most systems use LPCVD with pressures ranging from 1 to 1500 Pa. However, some still use APCVD. Low pressures are used more commonly as they help prevent unwanted reactions and produce more uniform thickness of deposition on the substrate.
On the other hand, temperatures used range from 800 to 1050 °C. High temperatures increase the rate of reaction, but caution has to be exercised, as high temperatures pose greater hazards in addition to greater energy costs.
Carrier gas
Hydrogen gas and inert gases such as argon are flowed into the system. These gases act as a carrier, enhancing surface reaction and improving reaction rate, thereby increasing deposition of graphene onto the substrate.
Chamber material
Standard quartz tubing and chambers are used in CVD of graphene. Quartz is chosen because it has a very high melting point and is chemically inert. In other words, quartz does not interfere with any physical or chemical reactions regardless of the conditions.
Methods of analysis of results
Raman spectroscopy, X-ray spectroscopy, transmission electron microscopy (TEM), and scanning electron microscopy (SEM) are used to examine and characterize the graphene samples.
Raman spectroscopy is used to characterize and identify the graphene particles; X-ray spectroscopy is used to characterize chemical states; TEM is used to provide fine details regarding the internal composition of graphene; SEM is used to examine the surface and topography.
Sometimes, atomic force microscopy (AFM) is used to measure local properties such as friction and magnetism.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
Graphene nanoribbon
In spite of graphene's exciting electronic and thermal properties, it is unsuitable as a transistor for future digital devices, due to the absence of a bandgap between the conduction and valence bands. This makes it impossible to switch between on and off states with respect to electron flow. Scaling things down, graphene nanoribbons of less than 10 nm in width do exhibit electronic bandgaps and are therefore potential candidates for digital devices. Precise control over their dimensions, and hence electronic properties, however, represents a challenging goal, and the ribbons typically possess rough edges that are detrimental to their performance.
Diamond
CVD can be used to produce a synthetic diamond by creating the circumstances necessary for carbon atoms in a gas to settle on a substrate in crystalline form. CVD of diamonds has received much attention in the materials sciences because it allows many new applications that had previously been considered too expensive. CVD diamond growth typically occurs under low pressure (1–27 kPa; 0.145–3.926 psi; 7.5–203 Torr) and involves feeding varying amounts of gases into a chamber, energizing them and providing conditions for diamond growth on the substrate. The gases always include a carbon source, and typically include hydrogen as well, though the amounts used vary greatly depending on the type of diamond being grown. Energy sources include hot filament, microwave power, and arc discharges, among others. The energy source is intended to generate a plasma in which the gases are broken down and more complex chemistries occur. The actual chemical process for diamond growth is still under study and is complicated by the very wide variety of diamond growth processes used.
Using CVD, films of diamond can be grown over large areas of substrate with control over the properties of the diamond produced. In the past, when high pressure high temperature (HPHT) techniques were used to produce a diamond, the result was typically very small free-standing diamonds of varying sizes. With CVD diamond, growth areas of greater than fifteen centimeters (six inches) in diameter have been achieved, and much larger areas are likely to be successfully coated with diamond in the future. Improving this process is key to enabling several important applications.
The growth of diamond directly on a substrate allows the addition of many of diamond's important qualities to other materials. Since diamond has the highest thermal conductivity of any bulk material, layering diamond onto high heat-producing electronics (such as optics and transistors) allows the diamond to be used as a heat sink. Diamond films are being grown on valve rings, cutting tools, and other objects that benefit from diamond's hardness and exceedingly low wear rate. In each case the diamond growth must be carefully done to achieve the necessary adhesion onto the substrate. Diamond's very high scratch resistance and thermal conductivity, combined with a lower coefficient of thermal expansion than Pyrex glass, a coefficient of friction close to that of Teflon (polytetrafluoroethylene) and strong lipophilicity would make it a nearly ideal non-stick coating for cookware if large substrate areas could be coated economically.
CVD growth allows one to control the properties of the diamond produced. In the area of diamond growth, the word "diamond" is used as a description of any material primarily made up of sp3-bonded carbon, and there are many different types of diamond included in this. By regulating the processing parameters—especially the gases introduced, but also including the pressure the system is operated under, the temperature of the diamond, and the method of generating plasma—many different materials that can be considered diamond can be made. Single-crystal diamond can be made containing various dopants. Polycrystalline diamond consisting of grain sizes from several nanometers to several micrometers can be grown. Some polycrystalline diamond grains are surrounded by thin, non-diamond carbon, while others are not. These different factors affect the diamond's hardness, smoothness, conductivity, optical properties and more.
Chalcogenides
Commercially, mercury cadmium telluride is of continuing interest for detection of infrared radiation. Consisting of an alloy of CdTe and HgTe, this material can be prepared from the dimethyl derivatives of the respective elements.
See also
Apollo Diamond
Bubbler cylinder
Carbonyl metallurgy
Electrostatic spray assisted vapour deposition
Element Six
Ion plating
Metalorganic vapour phase epitaxy
Virtual metrology
Lisa McElwee-White
List of metal-organic chemical vapour deposition precursors
List of synthetic diamond manufacturers
References
Further reading
Okada K. (2007). "Plasma-enhanced chemical vapor deposition of nanocrystalline diamond" Sci. Technol. Adv. Mater. 8, 624 free-download review
Liu T., Raabe D. and Zaefferer S. (2008). "A 3D tomographic EBSD analysis of a CVD diamond thin film" Sci. Technol. Adv. Mater. 9 (2008) 035013 free-download
Wild, Christoph (2008). "CVD Diamond Properties and Useful Formula" CVD Diamond Booklet PDF free-download
Hess, Dennis W. (1988). Chemical vapor deposition of dielectric and metal films . Free-download from Electronic Materials and Processing: Proceedings of the First Electronic Materials and Processing Congress held in conjunction with the 1988 World Materials Congress Chicago, Illinois, USA, 24–30 September 1988, Edited by Prabjit Singh (Sponsored by the Electronic Materials and Processing Division of ASM International).
Chemical processes
Coatings
Glass coating and surface modification
Industrial processes
Plasma processing
Semiconductor device fabrication
Synthetic diamond
Thin film deposition
Vacuum
Forming processes | Chemical vapor deposition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 5,125 | [
"Glass chemistry",
"Thin film deposition",
"Microtechnology",
"Coatings",
"Thin films",
"Vacuum",
"Chemical processes",
"Semiconductor device fabrication",
"nan",
"Chemical process engineering",
"Chemical vapor deposition",
"Planes (geometry)",
"Solid state engineering",
"Glass coating and... |
6,113 | https://en.wikipedia.org/wiki/Chain%20rule | In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions f and g in terms of the derivatives of f and g. More precisely, if h = f ∘ g is the function such that h(x) = f(g(x)) for every x, then the chain rule is, in Lagrange's notation,
h′(x) = f′(g(x)) g′(x),
or, equivalently,
h′ = (f ∘ g)′ = (f′ ∘ g) · g′.
The chain rule may also be expressed in Leibniz's notation. If a variable z depends on the variable y, which itself depends on the variable x (that is, y and z are dependent variables), then z depends on x as well, via the intermediate variable y. In this case, the chain rule is expressed as
dz/dx = (dz/dy) · (dy/dx),
and
(dz/dx)|x = (dz/dy)|y(x) · (dy/dx)|x,
for indicating at which points the derivatives have to be evaluated.
In integration, the counterpart to the chain rule is the substitution rule.
Intuitive explanation
Intuitively, the chain rule states that knowing the instantaneous rate of change of z relative to y and that of y relative to x allows one to calculate the instantaneous rate of change of z relative to x as the product of the two rates of change.
As put by George F. Simmons: "If a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man."
The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of relative positions of the car and the bicycle is dz/dy = 2. Similarly, dy/dx = 4. So, the rate of change of the relative positions of the car and the walking man is dz/dx = (dz/dy) · (dy/dx) = 2 · 4 = 8.
The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is,
dz/dy = (dz/dt) / (dy/dt),
or, equivalently,
dz/dt = (dz/dy) · (dy/dt),
which is also an application of the chain rule.
History
The chain rule seems to have first been used by Gottfried Wilhelm Leibniz. He used it to calculate the derivative of √(a + bz + cz²) as the composite of the square root function and the function a + bz + cz². He first mentioned it in a 1676 memoir (with a sign error in the calculation). The common notation of the chain rule is due to Leibniz. Guillaume de l'Hôpital used the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. It is believed that the first "modern" version of the chain rule appears in Lagrange's 1797 Théorie des fonctions analytiques; it also appears in Cauchy's 1823 Résumé des Leçons données à L'École Royale Polytechnique sur Le Calcul Infinitesimal.
Statement
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e. the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g is differentiable at c, and the derivative is
(f ∘ g)′(c) = f′(g(c)) · g′(c).
The rule is sometimes abbreviated as
(f ∘ g)′ = (f′ ∘ g) · g′.
If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as:
dy/dx = (dy/du) · (du/dx).
The points where the derivatives are evaluated may also be stated explicitly:
(dy/dx)|x = (dy/du)|u(x) · (du/dx)|x.
Carrying the same reasoning further, given n functions f1, ..., fn with the composite function f1 ∘ (f2 ∘ ⋯ (fn−1 ∘ fn)), if each function fi is differentiable at its immediate input, then the composite function is also differentiable by the repeated application of the chain rule, where the derivative is (in Leibniz's notation):
df1/dx = (df1/df2) · (df2/df3) ⋯ (dfn/dx).
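The rule as stated can be checked numerically. In the minimal Python sketch below, the functions sin and x², and the evaluation point x = 1.3, are illustrative choices; a central-difference estimate of (f ∘ g)′ is compared with the product f′(g(x)) · g′(x):

    import math

    def numerical_derivative(fn, x, h=1e-6):
        """Central-difference approximation of fn'(x)."""
        return (fn(x + h) - fn(x - h)) / (2 * h)

    f = math.sin              # f(u) = sin u,  f'(u) = cos u
    g = lambda x: x ** 2      # g(x) = x^2,    g'(x) = 2x
    x = 1.3

    composite = lambda t: f(g(t))
    print(numerical_derivative(composite, x))   # approx. -0.3092
    print(math.cos(g(x)) * 2 * x)               # f'(g(x)) * g'(x), same value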
Applications
Composites of more than two functions
The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again.
For concreteness, consider the function
y = e^(sin(x²)).
This can be decomposed as the composite of three functions:
y = f(u) = e^u, u = g(v) = sin v, v = h(x) = x².
So that y = f(g(h(x))).
Their derivatives are:
dy/du = f′(u) = e^u, du/dv = g′(v) = cos v, dv/dx = h′(x) = 2x.
The chain rule states that the derivative of their composite at the point x = a is:
(f ∘ g ∘ h)′(a) = f′((g ∘ h)(a)) · g′(h(a)) · h′(a) = e^(sin(a²)) · cos(a²) · 2a.
In Leibniz's notation, this is:
dy/dx = (dy/du)|u = g(h(a)) · (du/dv)|v = h(a) · (dv/dx)|x = a,
or for short,
dy/dx = (dy/du) · (du/dv) · (dv/dx).
The derivative function is therefore:
dy/dx = e^(sin(x²)) · cos(x²) · 2x.
Another way of computing this derivative is to view the composite function f ∘ g ∘ h as the composite of f ∘ g and h. Applying the chain rule in this manner would yield:
(f ∘ g ∘ h)′(a) = (f ∘ g)′(h(a)) · h′(a) = f′(g(h(a))) · g′(h(a)) · h′(a).
This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h).
Sometimes, it is necessary to differentiate an arbitrarily long composition of the form f1 ∘ f2 ∘ ⋯ ∘ fn−1 ∘ fn. In this case, define
f(a..b) = fa ∘ fa+1 ∘ ⋯ ∘ fb,
where f(a..a) = fa and f(a..b)(x) = x when b < a. Then the chain rule takes the form
D f(1..n) = (D f1 ∘ f(2..n)) (D f2 ∘ f(3..n)) ⋯ (D fn−1 ∘ f(n..n)) D fn,
or, in the Lagrange notation,
f(1..n)′(x) = f1′(f(2..n)(x)) f2′(f(3..n)(x)) ⋯ fn−1′(f(n..n)(x)) fn′(x).
Quotient rule
The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule:
(d/dx)(f(x)/g(x)) = f′(x) · 1/g(x) + f(x) · (d/dx)(1/g(x)).
To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is −1/x². By applying the chain rule, the last expression becomes:
f′(x) · 1/g(x) + f(x) · (−1/g(x)² · g′(x)) = (f′(x) g(x) − f(x) g′(x)) / g(x)²,
which is the usual formula for the quotient rule.
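The derivation can be confirmed symbolically; the following short sketch, assuming the SymPy library is available, differentiates f/g directly and subtracts the quotient-rule formula:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.Function('f')(x)
    g = sp.Function('g')(x)

    direct = sp.diff(f / g, x)                                  # differentiate f/g directly
    formula = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2    # quotient-rule formula
    print(sp.simplify(direct - formula))                        # 0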
Derivatives of inverse functions
Suppose that y = g(x) has an inverse function. Call its inverse function f so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula
f(g(x)) = x.
And because the functions f(g(x)) and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f(g(x)) is determined by the chain rule. Therefore, we have that:
f′(g(x)) g′(x) = 1.
To express f′ as a function of an independent variable y, we substitute f(y) for x wherever it appears. Then we can solve for f′:
f′(y) = 1 / g′(f(y)).
For example, consider the function g(x) = e^x. It has an inverse f(y) = ln y. Because g′(x) = e^x, the above formula says that
(d/dy) ln y = 1 / e^(ln y) = 1/y.
This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x³. Its inverse is f(y) = y^(1/3), which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.
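The exponential/logarithm example above can be checked numerically; in this minimal Python sketch, the evaluation point y = 2.5 is an arbitrary illustrative choice:

    import math

    def numerical_derivative(fn, x, h=1e-6):
        return (fn(x + h) - fn(x - h)) / (2 * h)

    y = 2.5
    # f = ln is the inverse of g = exp, so f'(y) = 1 / g'(f(y)) = 1 / y:
    print(numerical_derivative(math.log, y))   # approx. 0.4
    print(1 / math.exp(math.log(y)))           # exactly 1/y = 0.4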
Back propagation
The chain rule forms the basis of the back propagation algorithm, which is used in gradient descent of neural networks in deep learning (artificial intelligence).
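A minimal sketch of that connection, assuming an illustrative two-weight "network" with sigmoid activations (the weights and input below are arbitrary): the gradient of the output with respect to the inner weight is a product of local derivatives, which is just the chain rule applied repeatedly:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    w1, w2, x = 0.5, -1.2, 0.8   # illustrative weights and input

    # Forward pass: y = sigmoid(w2 * sigmoid(w1 * x))
    a = w1 * x
    h = sigmoid(a)
    b = w2 * h
    y = sigmoid(b)

    # Backward pass: the chain rule, applied link by link from y back to w1.
    dy_db = y * (1.0 - y)        # sigmoid'(b)
    db_dh = w2
    dh_da = h * (1.0 - h)        # sigmoid'(a)
    da_dw1 = x
    print(dy_db * db_dh * dh_da * da_dw1)   # dy/dw1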
Higher derivatives
Faà di Bruno's formula generalizes the chain rule to higher derivatives. Assuming that y = f(u) and u = g(x), then the first few derivatives are:
dy/dx = (dy/du)(du/dx)
d²y/dx² = (d²y/du²)(du/dx)² + (dy/du)(d²u/dx²)
d³y/dx³ = (d³y/du³)(du/dx)³ + 3(d²y/du²)(du/dx)(d²u/dx²) + (dy/du)(d³u/dx³)
Proofs
First proof
One proof of the chain rule begins by defining the derivative of the composite function f ∘ g, where we take the limit of the difference quotient for f ∘ g as x approaches a:
(f ∘ g)′(a) = lim(x→a) [f(g(x)) − f(g(a))] / (x − a).
Assume for the moment that g(x) does not equal g(a) for any x near a. Then the previous expression is equal to the product of two factors:
lim(x→a) { [f(g(x)) − f(g(a))] / [g(x) − g(a)] } · { [g(x) − g(a)] / (x − a) }.
If g oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) = g(a). For example, this happens near a = 0 for the continuous function g defined by g(x) = 0 for x = 0 and g(x) = x² sin(1/x) otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q as follows:
Q(y) = [f(y) − f(g(a))] / [y − g(a)] for y ≠ g(a), and Q(y) = f′(g(a)) for y = g(a).
We will show that the difference quotient for f ∘ g is always equal to:
Q(g(x)) · [g(x) − g(a)] / (x − a).
Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and [g(x) − g(a)] / (x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a).
As for Q(g(x)), notice that Q is defined wherever f is. Furthermore, f is differentiable at g(a) by assumption, so Q is continuous at g(a), by definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)).
This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a)) g′(a).
Second proof
Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: A function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore
g(a + h) − g(a) = g′(a) h + ε(h) h.
Here the left-hand side represents the true difference between the value of g at a and at a + h, whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have
f(g(a) + k) − f(g(a)) = f′(g(a)) k + η(k) k.
The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set η(0) = 0, then η is continuous at 0.
Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as h tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of g at a:
f(g(a + h)) − f(g(a)) = f(g(a) + g′(a)h + ε(h)h) − f(g(a)).
The next step is to use the definition of differentiability of f at g(a). This requires a term of the form f(g(a) + k) for some k. In the above equation, the correct k varies with h. Set kh = g′(a)h + ε(h)h and the right hand side becomes f(g(a) + kh) − f(g(a)). Applying the definition of the derivative gives:
f(g(a) + kh) − f(g(a)) = f′(g(a)) kh + η(kh) kh.
To study the behavior of this expression as h tends to zero, expand kh. After regrouping the terms, the right-hand side becomes:
f′(g(a)) g′(a) h + [f′(g(a)) ε(h) + η(kh) g′(a) + η(kh) ε(h)] h.
Because ε(h) and η(kh) tend to zero as h tends to zero, the first two bracketed terms tend to zero as h tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a)) g′(a).
The role of Q in the first proof is played by η in this proof. They are related by the equation:
Q(y) = f′(g(a)) + η(y − g(a)).
The need to define Q at g(a) is analogous to the need to define η at zero.
Third proof
Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule.
Under this definition, a function f is differentiable at a point a if and only if there is a function q, continuous at a, such that f(x) − f(a) = q(x)(x − a). There is at most one such function, and if f is differentiable at a then q(a) = f′(a).
Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions q, continuous at g(a), and r, continuous at a, such that
f(g(x)) − f(g(a)) = q(g(x))(g(x) − g(a))
and
g(x) − g(a) = r(x)(x − a).
Therefore,
f(g(x)) − f(g(a)) = q(g(x)) r(x) (x − a),
but the function given by h(x) = q(g(x)) r(x) is continuous at a, and we get, for this a,
(f(g(a)))′ = q(g(a)) r(a) = f′(g(a)) g′(a).
A similar approach works for continuously differentiable (vector-)functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
Proof via infinitesimals
If y = f(x) and x = g(t), then choosing an infinitesimal Δt ≠ 0 we compute the corresponding Δx = g(t + Δt) − g(t) and then the corresponding Δy = f(x + Δx) − f(x), so that
Δy/Δt = (Δy/Δx)(Δx/Δt),
and applying the standard part we obtain
dy/dt = (dy/dx)(dx/dt),
which is the chain rule.
Multivariable case
The full generalization of the chain rule to multi-variable functions (such as f : R^n → R^m) is rather technical. However, it is simpler to write in the case of functions of the form
f(g1(x), ..., gk(x)),
where f : R^k → R, and gi : R → R for each i = 1, 2, ..., k.
As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
Case of scalar-valued functions with multiple inputs
Let f : R^k → R, and gi : R → R for each i = 1, 2, ..., k.
To write the chain rule for the composition of functions
x ↦ f(g1(x), ..., gk(x)),
one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use D-notation, and to denote by
Di f
the partial derivative of f with respect to its ith argument, and by
Di f(z)
the value of this derivative at z.
With this notation, the chain rule is
(d/dx) f(g1(x), ..., gk(x)) = Σ (from i = 1 to k) gi′(x) · Di f(g1(x), ..., gk(x)).
Example: arithmetic operations
If the function f is addition, that is, if
f(u, v) = u + v,
then D1 f = 1 and D2 f = 1. Thus, the chain rule gives
(d/dx)(g(x) + h(x)) = g′(x) + h′(x).
For multiplication
f(u, v) = uv,
the partials are D1 f = v and D2 f = u. Thus,
(d/dx)(g(x) h(x)) = g′(x) h(x) + g(x) h′(x).
The case of exponentiation
f(u, v) = u^v
is slightly more complicated, as
D1 f = v u^(v−1),
and, as u^v = e^(v ln u),
D2 f = u^v ln u.
It follows that
(d/dx) g(x)^(h(x)) = h(x) g(x)^(h(x)−1) g′(x) + g(x)^(h(x)) ln(g(x)) h′(x).
General rule: Vector-valued functions with multiple inputs
The simplest way for writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions f : R^m → R^k and g : R^n → R^m, and a point a in R^n. Let Da g denote the total derivative of g at a and Dg(a) f denote the total derivative of f at g(a). These two derivatives are linear transformations R^n → R^m and R^m → R^k, respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of f ∘ g at a:
Da(f ∘ g) = Dg(a) f ∘ Da g,
or for short,
D(f ∘ g) = Df ∘ Dg.
The higher-dimensional chain rule can be proved using a technique similar to the second proof given above.
Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices. The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:
J(f ∘ g)(a) = Jf(g(a)) Jg(a),
or for short,
J(f ∘ g) = (Jf ∘ g) Jg.
That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).
The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If k, m, and n are 1, so that f : R → R and g : R → R, then the Jacobian matrices of f and g are 1 × 1. Specifically, they are:
Jg(a) = (g′(a)), Jf(g(a)) = (f′(g(a))).
The Jacobian of f ∘ g is the product of these 1 × 1 matrices, so it is f′(g(a)) g′(a), as expected from the one-dimensional chain rule. In the language of linear transformations, Da(g) is the function which scales a vector by a factor of g′(a) and Dg(a)(f) is the function which scales a vector by a factor of f′(g(a)). The chain rule says that the composite of these two linear transformations is the linear transformation Da(f ∘ g), and therefore it is the function that scales a vector by f′(g(a)) g′(a).
Another way of writing the chain rule is used when f and g are expressed in terms of their components as y = f(u) = (f1(u), ..., fk(u)) and u = g(x) = (g1(x), ..., gm(x)). In this case, the above rule for Jacobian matrices is usually written as:
∂(y1, ..., yk)/∂(x1, ..., xn) = ∂(y1, ..., yk)/∂(u1, ..., um) · ∂(u1, ..., um)/∂(x1, ..., xn).
The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the i-th coordinate direction is found by multiplying the Jacobian matrix by the i-th basis vector. By doing this to the formula above, we find:
∂(f ∘ g)/∂xi (a) = Dg(a) f · (∂g/∂xi (a)).
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
∂yj/∂xi = Σ (from l = 1 to m) (∂yj/∂ul)(∂ul/∂xi).
More conceptually, this rule expresses the fact that a change in the xi direction may change all of u1 through um, and any of these changes may affect y.
In the special case where k = 1, so that f is a real-valued function, this formula simplifies even further:
∂y/∂xi = Σ (from l = 1 to m) (∂y/∂ul)(∂ul/∂xi).
This can be rewritten as a dot product. Recalling that u = (u1, ..., um), the partial derivative ∂u/∂xi is also a vector, and the chain rule says that:
∂y/∂xi = ∇y · (∂u/∂xi).
Example
Given u(x, y) = x² + 2y, where x(r, t) = r sin(t) and y(r, t) = sin²(t), determine the value of ∂u/∂r and ∂u/∂t using the chain rule.
∂u/∂r = (∂u/∂x)(∂x/∂r) + (∂u/∂y)(∂y/∂r) = (2x)(sin t) + (2)(0) = 2r sin²(t),
and
∂u/∂t = (∂u/∂x)(∂x/∂t) + (∂u/∂y)(∂y/∂t) = (2x)(r cos t) + (2)(2 sin t cos t) = 2r² sin(t) cos(t) + 4 sin(t) cos(t).
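The same computation can be checked symbolically; a short sketch, assuming the SymPy library is available:

    import sympy as sp

    r, t = sp.symbols('r t')
    x = r * sp.sin(t)
    y = sp.sin(t) ** 2
    u = x**2 + 2*y   # u(x, y) with x(r, t) and y(r, t) substituted

    print(sp.simplify(sp.diff(u, r)))   # 2*r*sin(t)**2
    print(sp.simplify(sp.diff(u, t)))   # (2*r**2 + 4)*sin(t)*cos(t), up to trig rewriting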
Higher derivatives of multivariable functions
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If y = f(u) is a function of u = g(x) as above, then the second derivative of f ∘ g is:
∂²y/(∂xi ∂xj) = Σ over k of (∂y/∂uk)(∂²uk/(∂xi ∂xj)) + Σ over k, l of (∂²y/(∂uk ∂ul))(∂uk/∂xi)(∂ul/∂xj).
Further generalizations
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different.
One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of f ∘ g is the composite of the derivative of f and the derivative of g. This theorem is an immediate consequence of the higher dimensional chain rule given above, and it has exactly the same formula.
The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds.
In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings f : R → S determines a morphism of Kähler differentials Df : ΩR → ΩS which sends an element dr to d(f(r)), the exterior differential of f(r). The formula D(f ∘ g) = Df ∘ Dg holds in this context as well.
The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a C^r-manifold to a C^(r−1)-manifold (its tangent bundle) and a C^r-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula D(f ∘ g) = Df ∘ Dg.
There are also chain rules in stochastic calculus. One of these, Itō's lemma, expresses the composite of an Itō process (or more generally a semimartingale) dXt with a twice-differentiable function f. In Itō's lemma, the derivative of the composite function depends not only on dXt and the derivative of f but also on the second derivative of f. The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor because the two functions being composed are of different types.
See also
Automatic differentiation − a computational method that makes heavy use of the chain rule to compute exact numerical derivatives.
References
External links
Articles containing proofs
Differentiation rules
Theorems in analysis
Theorems in calculus | Chain rule | [
"Mathematics"
] | 3,867 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in calculus",
"Calculus",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
6,115 | https://en.wikipedia.org/wiki/P%20versus%20NP%20problem | The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
Here, "quickly" means an algorithm that solves the task and runs in polynomial time (as opposed to, say, exponential time) exists, meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time".
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Example
Consider the following yes/no problem: given an incomplete Sudoku grid of size n² × n², is there at least one legal solution where every row, column, and n × n square contains the integers 1 through n²? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.)
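The verification claim is easy to exhibit in code. The following minimal Python sketch (all names are illustrative, and the completed 4 × 4 grid is a toy instance) checks a candidate solution in time polynomial in the grid size; no comparably fast general solver is known:

    def is_valid_sudoku_solution(grid):
        """Check every row, column, and box of a completed grid; polynomial time."""
        size = len(grid)
        box = int(size ** 0.5)
        want = set(range(1, size + 1))
        rows = all(set(row) == want for row in grid)
        cols = all({grid[r][c] for r in range(size)} == want for c in range(size))
        boxes = all(
            {grid[br + i][bc + j] for i in range(box) for j in range(box)} == want
            for br in range(0, size, box) for bc in range(0, size, box)
        )
        return rows and cols and boxes

    solved = [[1, 2, 3, 4],
              [3, 4, 1, 2],
              [2, 1, 4, 3],
              [4, 3, 2, 1]]
    print(is_valid_sudoku_solution(solved))   # True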
History
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
Context
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believed P ≠ NP. These polls do not imply anything about whether P = NP; Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
NP-completeness
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
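The asymmetry at the heart of the question can be made concrete for satisfiability: verifying a candidate assignment takes polynomial time, while the only obvious solver tries all 2^n assignments. A minimal Python sketch, where the three-clause formula is an illustrative toy instance:

    from itertools import product

    # (x0 or x1) and (not x0 or x2) and (not x1 or not x2);
    # each literal is a (variable index, is_negated) pair.
    cnf = [[(0, False), (1, False)], [(0, True), (2, False)], [(1, True), (2, True)]]

    def satisfies(assignment, cnf):
        """Polynomial-time verification of a candidate assignment."""
        return all(any(assignment[v] != neg for v, neg in clause) for clause in cnf)

    def brute_force_sat(cnf, n_vars):
        """Exhaustive search: 2^n candidates, each verified quickly."""
        for bits in product([False, True], repeat=n_vars):
            if satisfies(bits, cnf):
                return bits
        return None

    print(brute_force_sat(cnf, 3))   # (False, True, False)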
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Harder problems
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
Problems in NP not known to be in P or NP-complete
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time
exp(((64/9)^(1/3) + o(1)) · (n ln 2)^(1/3) · (ln(n ln 2))^(2/3))
to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory; but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n²), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) (using Knuth's up-arrow notation), where h is the number of vertices in H.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
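One concrete illustration of this point is the classic dynamic program for the 0/1 knapsack problem, which runs in O(n · W) time – pseudo-polynomial in the capacity W rather than polynomial in the input size, yet fast for the moderate capacities that arise in many practical instances. A minimal Python sketch with illustrative values:

    def knapsack_max_value(weights, values, capacity):
        """0/1 knapsack by dynamic programming: O(n * capacity) time."""
        best = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            for cap in range(capacity, w - 1, -1):   # descending: each item used once
                best[cap] = max(best[cap], best[cap - w] + v)
        return best[capacity]

    print(knapsack_max_value([3, 4, 5], [30, 50, 60], capacity=8))   # 90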
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Reasons to believe P ≠ NP or P = NP
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP; such statements were made publicly in 2002, for example.
DLIN vs NLIN
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known that DLIN ≠ NLIN.
Consequences of solution
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
P = NP
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet.
Symmetric ciphers such as AES or 3DES, used for the encryption of communications data.
Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT.
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics.
Stephen Cook, assuming not only a proof but a practically efficient algorithm, has made a similar observation.
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof.
P ≠ NP
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Results about difficulty of proof
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP: relativizing proofs (ruled out because there are oracles relative to which P = NP and oracles relative to which P ≠ NP), natural proofs (ruled out, under standard cryptographic assumptions, by the Razborov–Rudich theorem), and algebrizing proofs (ruled out by the results of Aaronson and Wigderson).
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
Logical characterizations
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
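For instance (a standard illustration of this characterization, not specific to the article), 3-colorability of a graph with edge relation E — an NP-complete property — is defined by the existential second-order sentence
$$\exists R\, \exists G\, \exists B\; \Big[ \forall x \big(R(x) \lor G(x) \lor B(x)\big) \;\land\; \forall x\, \forall y\, \Big(E(x, y) \to \neg\big(R(x) \land R(y)\big) \land \neg\big(G(x) \land G(y)\big) \land \neg\big(B(x) \land B(y)\big)\Big) \Big],$$
which existentially quantifies over the three color classes (unary relations) and then checks them with a first-order formula.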
Polynomial-time algorithms
No known algorithm for an NP-complete problem runs in polynomial time. However, there are algorithms for NP-complete problems with the property that, if P = NP, they run in polynomial time on accepting instances (although with enormous constants, making them impractical). These algorithms still do not qualify as polynomial time, because their running time on rejecting instances is not polynomial. The following algorithm, attributed to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM, and it runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
FOR M = 1...K
Run program number M for K steps with input S
IF the program outputs a list of distinct integers
AND the integers are all in S
AND the integers sum to 0
THEN
OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, its program number is at least $2^{b-1}$, so the above algorithm will try at least $2^{b-1} - 1$ other programs first.
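The dovetailing structure of the search can be sketched in runnable Python, with the caveat that safely enumerating and interpreting all binary programs is beyond a short example; the programs argument below is a hypothetical stand-in for that enumeration, each entry being a callable that receives the input set and a step budget and returns a proposed subset or None. The sketch illustrates the control structure only.

def levin_style_search(S, programs):
    # Dovetail over (program index M, step budget K) pairs, as in the
    # pseudocode above; `programs` is a hypothetical enumeration.
    K = 1
    while True:  # runs forever when the answer is "no"
        for M in range(1, K + 1):
            if M > len(programs):
                break
            candidate = programs[M - 1](S, K)  # "run program M for K steps"
            if candidate is not None:
                subset = list(candidate)
                # Verify the certificate exactly as the pseudocode does.
                if (subset
                        and len(subset) == len(set(subset))
                        and set(subset) <= set(S)
                        and sum(subset) == 0):
                    return "yes"
        K += 1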
Formal definitions
P and NP
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most $cn^k$ steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. Meaning,
$$\mathbf{P} = \{ L : L = L(M) \text{ for some deterministic polynomial-time Turing machine } M \},$$
where
$$L(M) = \{ w \in \Sigma^* : M \text{ accepts } w \},$$
and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions:
M halts on all inputs w, and
there exists $k \in \mathbb{N}$ such that $T(M, w) \in O(|w|^k)$, where O refers to the big O notation and $T(M, w)$ is the number of steps M takes to halt on input w.
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages with a finite alphabet and verifier that runs in polynomial time. The following defines a "verifier":
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation $R \subseteq \Sigma^* \times \Sigma^*$ and a positive integer k such that the following two conditions are satisfied:
For all $x \in \Sigma^*$, $x \in L$ if and only if there exists $y \in \Sigma^*$ such that $(x, y) \in R$ and $|y| \in O(|x|^k)$; and
the language $L_R = \{ x \# y : (x, y) \in R \}$ over $\Sigma \cup \{\#\}$ is decidable by a deterministic Turing machine in polynomial time.
A Turing machine that decides $L_R$ is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
Example
Let
$$\text{COMPOSITE} = \left\{ x \in \mathbb{N} \mid x = pq \text{ for integers } p, q > 1 \right\}.$$
Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
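A polynomial-time verifier in the sense of the definition above can be sketched in Python; here the certificate for x is a pair (p, q) witnessing a nontrivial factorization (the function name is an illustrative choice).

def verify_composite(x, certificate):
    # Certificate of membership in COMPOSITE: integers p, q > 1 with p*q == x.
    # Multiplication and comparison take time polynomial in the bit lengths,
    # so this is a polynomial-time verifier.
    p, q = certificate
    return p > 1 and q > 1 and p * q == x

For example, verify_composite(15, (3, 5)) returns True, while no valid certificate exists for a prime input.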
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
NP-completeness
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
L ∈ NP; and
any L′ in NP is polynomial-time-reducible to L (written as $L' \leq_p L$), where $L' \leq_p L$ if, and only if, the following two conditions are satisfied:
There exists f : Σ* → Σ* such that for all w in Σ* we have: $w \in L' \Leftrightarrow f(w) \in L$; and
there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
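As an illustration, one classic reduction of this kind maps INDEPENDENT-SET to CLIQUE by complementing the edge set: a vertex set is independent in G exactly when it is a clique in the complement of G. A minimal Python sketch (the graph representation and names are illustrative choices, not a standard API):

def independent_set_to_clique(vertices, edges, k):
    # Map the INDEPENDENT-SET instance (G, k) to the CLIQUE instance
    # (complement of G, k). Building the complement takes O(|V|^2) time,
    # so the map f is polynomial-time computable.
    edge_set = {frozenset(e) for e in edges}
    complement_edges = [
        (u, v)
        for i, u in enumerate(vertices)
        for v in vertices[i + 1:]
        if frozenset((u, v)) not in edge_set
    ]
    return vertices, complement_edges, k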
Claimed solutions
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted.
Popular culture
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In the sixth episode of The Simpsons seventh season "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.
Similar problems
The R vs. RE problem, where R is the analog of the class P and RE is the analog of NP. These classes are not equal, because undecidable but verifiable problems do exist; for example, Hilbert's tenth problem, which is RE-complete.
A similar problem exists in the theory of algebraic complexity: the VP vs. VNP problem. This problem has not been solved yet.
See also
Game complexity
List of unsolved problems in mathematics
Unique games conjecture
Unsolved problems in computer science
Notes
References
Sources
Further reading
Online drafts
External links
Aviad Rubinstein's Hardness of Approximation Between P and NP, winner of the ACM's 2017 Doctoral Dissertation Award.
1956 in computing
Computer-related introductions in 1956
Conjectures
Mathematical optimization
Millennium Prize Problems
Structural complexity theory
Unsolved problems in computer science
Unsolved problems in mathematics | P versus NP problem | [
"Mathematics"
] | 6,469 | [
"Mathematical analysis",
"Unsolved problems in mathematics",
"Unsolved problems in computer science",
"Conjectures",
"Millennium Prize Problems",
"Mathematical optimization",
"Mathematical problems"
] |
6,118 | https://en.wikipedia.org/wiki/Carnot%20heat%20engine | A Carnot heat engine is a theoretical heat engine that operates on the Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824. The Carnot engine model was graphically expanded by Benoît Paul Émile Clapeyron in 1834 and mathematically explored by Rudolf Clausius in 1857, work that led to the fundamental thermodynamic concept of entropy. The Carnot engine is the most efficient heat engine which is theoretically possible. The efficiency depends only upon the absolute temperatures of the hot and cold heat reservoirs between which it operates.
A heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. The system may be worked upon by an external force, and in the process, it can transfer thermal energy from a cooler system to a warmer one, thereby acting as a refrigerator or heat pump rather than a heat engine.
Every thermodynamic system exists in a particular state. A thermodynamic cycle occurs when a system is taken through a series of different states, and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine.
The Carnot engine is a theoretical construct, useful for exploring the efficiency limits of other heat engines. An actual Carnot engine, however, would be completely impractical to build.
Carnot's diagram
In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are "two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies to which we can give, or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e., "work", by carrying a certain quantity of heat from body A to body B.
Modern diagram
The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine. The figure at right shows a block diagram of a generic heat engine, such as the Carnot engine. In the diagram, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work. Carnot had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, air, etc. Although in those early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water was boiled over a furnace; QC was typically removed by a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work, W, is transmitted by the movement of the piston as it is used to turn a crank-arm, which in turn was typically used to power a pulley so as to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".
Carnot cycle
The Carnot cycle when acting as a heat engine consists of the following steps:
Reversible isothermal expansion of the gas at the "hot" temperature, $T_\text{H}$ (isothermal heat addition or absorption). During this step (1 to 2) the gas is allowed to expand, and it does work on the surroundings. The temperature of the gas (the system) does not change during the process, and thus the expansion is isothermal. The gas expansion is propelled by absorption of heat energy $Q_\text{H}$ and of entropy $\Delta S = Q_\text{H}/T_\text{H}$ from the high temperature reservoir.
Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3) the piston and cylinder are assumed to be thermally insulated, thus they neither gain nor lose heat. The gas continues to expand, doing work on the surroundings, and losing an equivalent amount of internal energy. The gas expansion causes it to cool to the "cold" temperature, $T_\text{C}$. The entropy remains unchanged.
Reversible isothermal compression of the gas at the "cold" temperature, $T_\text{C}$ (isothermal heat rejection) (3 to 4). Now the gas is exposed to the cold temperature reservoir while the surroundings do work on the gas by compressing it (such as through the return compression of a piston), causing an amount of waste heat $Q_\text{C} < 0$ (with the standard sign convention for heat) and of entropy $\Delta S = Q_\text{C}/T_\text{C}$ to flow out of the gas to the low temperature reservoir. (In magnitude, this is the same amount of entropy absorbed in step 1. The entropy decreases in isothermal compression since the multiplicity of the system decreases with the volume.) In terms of magnitude, the recompression work performed by the surroundings in this step is less than the work performed on the surroundings in step 1 because it occurs at a lower pressure due to the lower temperature (i.e., the resistance to compression is lower in step 3 than the force of expansion in step 1). We can refer to the first law of thermodynamics, $\Delta U = Q - W$, to explain this behavior.
Isentropic compression of the gas (isentropic work input) (4 to 1). Once again the piston and cylinder are assumed to be thermally insulated and the cold temperature reservoir is removed. During this step, the surroundings continue to do work to further compress the gas, and both the temperature and pressure rise now that the heat sink has been removed. This additional work increases the internal energy of the gas, compressing it and causing the temperature to rise to $T_\text{H}$. The entropy remains unchanged. At this point the gas is in the same state as at the start of step 1.
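The energy bookkeeping of these four steps can be checked numerically. A minimal Python sketch, assuming one mole of ideal gas and illustrative reservoir temperatures and volumes (none of these values come from the text): the heats of the two isothermal steps alone reproduce the Carnot efficiency, because the two adiabatic works cancel.

import math

R = 8.314                      # gas constant, J/(mol*K)
n = 1.0                        # moles of ideal gas (illustrative)
T_hot, T_cold = 500.0, 300.0   # reservoir temperatures in K (illustrative)
V1, V2 = 1.0, 2.0              # volumes before/after the isothermal expansion

# Isothermal ideal-gas step: Q = W = nRT ln(V_final / V_initial).
Q_hot = n * R * T_hot * math.log(V2 / V1)    # heat absorbed at T_hot (> 0)
# The two adiabats force the compression volume ratio to be V1/V2, so
Q_cold = n * R * T_cold * math.log(V1 / V2)  # heat rejected at T_cold (< 0)

W = Q_hot + Q_cold                           # net work per cycle
print(W / Q_hot)               # efficiency computed from the cycle: 0.4
print(1 - T_cold / T_hot)      # Carnot formula: 0.4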
Carnot's theorem
Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.
Explanation
This maximum efficiency is
$$\eta_\text{max} = \eta_\text{Carnot} = \frac{W}{Q_\text{H}} = 1 - \frac{T_\text{C}}{T_\text{H}},$$
where
W is the work done by the system (energy exiting the system as work),
$Q_\text{H}$ is the heat put into the system (heat energy entering the system),
$T_\text{C}$ is the absolute temperature of the cold reservoir, and
$T_\text{H}$ is the absolute temperature of the hot reservoir.
A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient.
It is easily shown that the efficiency is maximum when the entire cyclic process is a reversible process. This means the total entropy of system and surroundings (the entropies of the hot furnace, the "working fluid" of the heat engine, and the cold sink) remains constant when the "working fluid" completes one cycle and returns to its original state. (In the general and more realistic case of an irreversible process, the total entropy of this combined system would increase.)
Since the "working fluid" comes back to the same state after one cycle, and entropy of the system is a state function, the change in entropy of the "working fluid" system is 0. Thus, it implies that the total entropy change of the furnace and sink is zero, for the process to be reversible and the efficiency of the engine to be maximum. This derivation is carried out in the next section.
When the cycle is run in reverse, as a heat pump, the coefficient of performance (COP) is the reciprocal of the engine's efficiency.
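Explicitly, for the reversed (heat pump) cycle between the same reservoirs, the heating coefficient of performance is
$$\mathrm{COP}_\text{heating} = \frac{Q_\text{H}}{W} = \frac{T_\text{H}}{T_\text{H} - T_\text{C}} = \frac{1}{\eta_\text{Carnot}};$$
for example, reservoirs at 500 K and 300 K (illustrative values) give $\eta_\text{Carnot} = 0.4$ and a COP of 2.5.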
Efficiency of real heat engines
For a real heat engine, the total thermodynamic process is generally irreversible. The working fluid is brought back to its initial state after one cycle, and thus the change of entropy of the fluid system is 0, but the sum of the entropy changes in the hot and cold reservoir in this one cyclical process is greater than 0.
The internal energy of the fluid is also a state variable, so its total change in one cycle is 0. So the total work done by the system W is equal to the net heat put into the system, the sum of the heat $Q_\text{h} > 0$ taken up and the waste heat $Q_\text{c} < 0$ given off:
$$W = Q_\text{h} + Q_\text{c}.$$
For real engines, stages 1 and 3 of the Carnot cycle, in which heat is absorbed by the "working fluid" from the hot reservoir, and released by it to the cold reservoir, respectively, no longer remain ideally reversible, and there is a temperature differential between the temperature of the reservoir and the temperature of the fluid while heat exchange takes place.
During heat transfer from the hot reservoir at $T_\text{h}$ to the fluid, the fluid would have a slightly lower temperature than $T_\text{h}$, and the process for the fluid may not necessarily remain isothermal.
Let $\Delta S_\text{h}$ be the total entropy change of the fluid in the process of intake of heat,
$$\Delta S_\text{h} = \int \frac{\delta Q_\text{h}}{T},$$
where the temperature of the fluid T is always slightly less than $T_\text{h}$ in this process.
So, one would get
$$\Delta S_\text{h} \ge \frac{Q_\text{h}}{T_\text{h}}.$$
Similarly, at the time of heat rejection from the fluid to the cold reservoir one would have, for the magnitude of the total entropy change $\Delta S_\text{c} < 0$ of the fluid in the process of expelling heat,
$$|\Delta S_\text{c}| = \int \frac{|\delta Q_\text{c}|}{T} \le \frac{|Q_\text{c}|}{T_\text{c}},$$
where, during this process of transfer of heat to the cold reservoir, the temperature of the fluid T is always slightly greater than $T_\text{c}$.
We have only considered the magnitude of the entropy change here. Since the total change of entropy of the fluid system for the cyclic process is 0, we must have
$$\Delta S_\text{h} = |\Delta S_\text{c}|.$$
Combining the inequality for heat intake with that for heat rejection, and using $\Delta S_\text{h} = |\Delta S_\text{c}|$, gives
$$\frac{Q_\text{h}}{T_\text{h}} \le \Delta S_\text{h} = |\Delta S_\text{c}| \le \frac{|Q_\text{c}|}{T_\text{c}},$$
so that
$$\frac{Q_\text{h}}{T_\text{h}} \le \frac{|Q_\text{c}|}{T_\text{c}}.$$
The two adiabatic steps involve no heat exchange; because they are isentropic, the volume ratios of the two isothermal steps are equal, so the adiabatic works are equal in magnitude and opposite in direction (one is done by the system, the other on the system) and cancel over the cycle. The thermal efficiency therefore depends only on the heat absorbed and the heat rejected:
$$\eta = \frac{W}{Q_\text{h}} = \frac{Q_\text{h} - |Q_\text{c}|}{Q_\text{h}} = 1 - \frac{|Q_\text{c}|}{Q_\text{h}}.$$
For the reversible Carnot engine, the entropy relations above hold with equality, so $\frac{|Q_\text{c}|}{Q_\text{h}} = \frac{T_\text{c}}{T_\text{h}}$ and
$$\eta_\text{Carnot} = 1 - \frac{T_\text{c}}{T_\text{h}}.$$
For a real engine, the inequality gives $\frac{T_\text{c}}{T_\text{h}} \le \frac{|Q_\text{c}|}{Q_\text{h}}$, and therefore
$$\eta = 1 - \frac{|Q_\text{c}|}{Q_\text{h}} \le 1 - \frac{T_\text{c}}{T_\text{h}}.$$
Hence,
$$\eta \le \eta_\text{Carnot},$$
where $\eta$ is the efficiency of the real engine and $\eta_\text{Carnot}$ is the efficiency of the Carnot engine working between the same two reservoirs at the temperatures $T_\text{h}$ and $T_\text{c}$. For the Carnot engine, the entire process is reversible, and the inequality becomes an equality. Hence, the efficiency of the real engine is always less than that of the ideal Carnot engine.
The inequality also signifies that the total entropy of system and surroundings (the fluid and the two reservoirs) increases for the real engine, because (in a surroundings-based analysis) the entropy gain of the cold reservoir as $|Q_\text{c}|$ flows into it at the fixed temperature $T_\text{c}$ is greater than the entropy loss of the hot reservoir as $Q_\text{h}$ leaves it at its fixed temperature $T_\text{h}$. This inequality is essentially the statement of the Clausius theorem.
According to Carnot's second theorem, "The efficiency of the Carnot engine is independent of the nature of the working substance".
The Carnot engine and Rudolf Diesel
In 1892 Rudolf Diesel patented an internal combustion engine inspired by the Carnot engine. Diesel knew a Carnot engine is an ideal that cannot be built, but he thought he had invented a working approximation. His principle was unsound, but in his struggle to implement it he developed a practical Diesel engine.
The conceptual problem was how to achieve isothermal expansion in an internal combustion engine, since burning fuel at the highest temperature of the cycle would only raise the temperature further. Diesel's patented solution was: having achieved the highest temperature just by compressing the air, to add a small amount of fuel at a controlled rate, such that heating caused by burning the fuel would be counteracted by cooling caused by air expansion as the piston moved. Hence all the heat from the fuel would be transformed into work during the isothermal expansion, as required by Carnot's theorem.
For the idea to work, a small mass of fuel would have to be burnt in a huge mass of air. Diesel first proposed a working engine that would compress air to 250 atmospheres, then cycle down to one atmosphere. However, this was well beyond the technological capabilities of the day, since it implied a compression ratio of 60:1. Such an engine, if it could have been built, would have had an efficiency of 73%. (In contrast, the best steam engines of his day achieved 7%.)
Accordingly, Diesel sought to compromise. He calculated that, were he to reduce the peak pressure to a less ambitious 90 atmospheres, he would sacrifice only 5% of the thermal efficiency. Seeking financial support, he published "Theory and Construction of a Rational Heat Engine to Take the Place of the Steam Engine and All Presently Known Combustion Engines" (1893). Endorsed by scientific opinion, including Lord Kelvin, he won the backing of Krupp and Maschinenfabrik Augsburg. He clung to the Carnot cycle as a symbol. But years of practical work failed to achieve an isothermal combustion engine, nor could they have, since it requires such an enormous quantity of air that the engine cannot develop enough power to compress it. Furthermore, controlled fuel injection turned out to be no easy matter.
Even so, the Diesel engine slowly evolved over 25 years to become a practical high-compression air engine, its fuel injected near the end of the compression stroke and ignited by the heat of compression, capable by 1969 of 40% efficiency.
As a macroscopic construct
The Carnot heat engine is, ultimately, a theoretical construct based on an idealized thermodynamic system. On a practical human-scale level the Carnot cycle has proven a valuable model, as in advancing the development of the diesel engine. However, on a macroscopic scale limitations placed by the model's assumptions prove it impractical, and, ultimately, incapable of doing any work. As such, per Carnot's theorem, the Carnot engine may be thought as the theoretical limit of macroscopic scale heat engines rather than any practical device that could ever be built.
For example, for the isothermal expansion part of the Carnot cycle, the following infinitesimal conditions must be satisfied simultaneously at every step in the expansion:
The hot reservoir temperature TH is infinitesimally higher than the system gas temperature T so heat flow (energy transfer) from the hot reservoir to the gas is made without increasing T (via infinitesimal work on the surroundings by the gas as another energy transfer); if TH is significantly higher than T, then T may be not uniform through the gas so the system would deviate from thermal equilibrium as well as not being a reversible process (i.e. not a Carnot cycle) or T might increase noticeably so it would not be an isothermal process.
The force externally applied on the piston (opposite to the internal force on the piston by the gas) needs to be infinitesimally reduced externally. Without this assistance, it would not be possible to follow a gas PV (pressure–volume) curve downward at a constant T, since following this curve means that the gas-to-piston force decreases (P decreases) as the volume expands (the piston moves outward). If this assistance is so strong that the volume expansion is significant, the system may deviate from thermal equilibrium, and the process would fail to be reversible (and thus not be a Carnot cycle).
Such "infinitesimal" requirements as these (and others) cause the Carnot cycle to take an infinite amount of time, rendering the production of work impossible.
Other practical requirements that make the Carnot cycle impractical to realize include fine control of the gas, and perfect thermal contact with the surroundings (including high and low temperature reservoirs).
Notes
External links
References
Carnot, Sadi. Reflections on the Motive Power of Fire (first edition 1824; reissued edition of 1878).
Full text of the 1897 English edition (archived HTML version).
Engines
Thermodynamic cycles | Carnot heat engine | [
"Physics",
"Technology"
] | 3,517 | [
"Physical systems",
"Machines",
"Engines"
] |
6,122 | https://en.wikipedia.org/wiki/Continuous%20function | In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function denoting the height of a growing flower at time would be considered continuous. In contrast, the function denoting the amount of money in a bank account at time would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
History
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of as follows: an infinitely small increment of the independent variable x always produces an infinitely small change of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
Real functions
Definition
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of $f(x)$, as x tends to c, is equal to $f(c)$.
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function $f(x) = \sqrt{x}$ is continuous on its whole domain, which is the closed interval $[0, +\infty)$.
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function and the tangent function When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions $x \mapsto \frac{1}{x}$ and $x \mapsto \sin\left(\frac{1}{x}\right)$ are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let $f : D \to \mathbb{R}$ be a function defined on a subset D of the set $\mathbb{R}$ of real numbers.
This subset D is the domain of f. Some possible choices include
$D = \mathbb{R}$: i.e., D is the whole set of real numbers; or, for a and b real numbers,
$D = [a, b] = \{ x \in \mathbb{R} \mid a \le x \le b \}$: D is a closed interval; or
$D = (a, b) = \{ x \in \mathbb{R} \mid a < x < b \}$: D is an open interval.
In the case of the domain D being defined as an open interval, a and b do not belong to D, and the values of $f(a)$ and $f(b)$ do not matter for continuity on D.
Definition in terms of limits of functions
The function f is continuous at some point c of its domain if the limit of $f(x)$, as x approaches c through the domain of f, exists and is equal to $f(c)$. In mathematical notation, this is written as
$$\lim_{x \to c} f(x) = f(c).$$
In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit on the left-hand side of that equation has to exist. Third, the value of this limit must equal $f(c)$.
(Here, we have assumed that the domain of f does not have any isolated points.)
Definition in terms of neighborhoods
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point $f(c)$ as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood $N_1(f(c))$, there is a neighborhood $N_2(c)$ in its domain such that $f(x) \in N_1(f(c))$ whenever $x \in N_2(c)$.
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
Definition in terms of limits of sequences
One can instead require that for any sequence $(x_n)_{n \in \mathbb{N}}$ of points in the domain which converges to c, the corresponding sequence $\left(f(x_n)\right)_{n \in \mathbb{N}}$ converges to $f(c)$. In mathematical notation,
$$\forall (x_n)_{n \in \mathbb{N}} \subset D : \lim_{n \to \infty} x_n = c \Rightarrow \lim_{n \to \infty} f(x_n) = f(c).$$
Weierstrass and Jordan definitions (epsilon–delta) of continuous functions
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function $f : D \to \mathbb{R}$ as above and an element $x_0$ of the domain D, f is said to be continuous at the point $x_0$ when the following holds: For any positive real number $\varepsilon > 0$, however small, there exists some positive real number $\delta > 0$ such that for all x in the domain of f with $x_0 - \delta < x < x_0 + \delta$, the value of $f(x)$ satisfies
$$f(x_0) - \varepsilon < f(x) < f(x_0) + \varepsilon.$$
Alternatively written, continuity of $f : D \to \mathbb{R}$ at $x_0 \in D$ means that for every $\varepsilon > 0$ there exists a $\delta > 0$ such that for all $x \in D$:
$$|x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon.$$
More intuitively, we can say that if we want to get all the $f(x)$ values to stay in some small neighborhood around $f(x_0)$, we need to choose a small enough neighborhood for the x values around $x_0$. If we can do that no matter how small the $f(x)$ neighborhood is, then f is continuous at $x_0$.
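As a worked example (a standard one, introduced here only for illustration), continuity of $f(x) = x^2$ at an arbitrary point $x_0$ follows by taking, for a given $\varepsilon > 0$,
$$\delta = \min\left(1, \frac{\varepsilon}{2|x_0| + 1}\right),$$
since $|x - x_0| < \delta$ implies $|x + x_0| \le |x - x_0| + 2|x_0| < 2|x_0| + 1$, and hence
$$\left|x^2 - x_0^2\right| = |x - x_0|\,|x + x_0| < \delta \left(2|x_0| + 1\right) \le \varepsilon.$$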
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval be entirely within the domain , but Jordan removed that restriction.
Definition in terms of control of the remainder
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function $C : [0, \infty) \to [0, \infty]$ is called a control function if
C is non-decreasing, and
$\inf_{\delta > 0} C(\delta) = 0$.
A function $f : D \to \mathbb{R}$ is C-continuous at $x_0$ if there exists a neighbourhood $N(x_0)$ such that
$$|f(x) - f(x_0)| \le C\left(|x - x_0|\right) \text{ for all } x \in D \cap N(x_0).$$
A function is continuous in $x_0$ if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions $\mathcal{C}$, a function is $\mathcal{C}$-continuous if it is C-continuous for some $C \in \mathcal{C}$. For example, the Lipschitz and Hölder continuous functions of exponent α below are defined by the set of control functions
$$\mathcal{C}_{\text{Lipschitz}} = \{ C : C(\delta) = K \delta,\ K > 0 \},$$
respectively
$$\mathcal{C}_{\text{Hölder}-\alpha} = \{ C : C(\delta) = K \delta^\alpha,\ K > 0 \}.$$
Definition using oscillation
Continuity can also be defined in terms of oscillation: a function f is continuous at a point $x_0$ if and only if its oscillation at that point is zero; in symbols, $\omega_f(x_0) = 0$. A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than $\varepsilon$ (hence a $G_\delta$ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the $\varepsilon$-$\delta$ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given $\varepsilon_0$ there is no $\delta$ that satisfies the $\varepsilon$-$\delta$ definition, then the oscillation is at least $\varepsilon_0$, and conversely if for every $\varepsilon$ there is a desired $\delta$, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
Definition using the hyperreals
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
A real-valued function f is continuous at x if its natural extension to the hyperreals has the property that for all infinitesimal dx, $f(x + dx) - f(x)$ is infinitesimal
(see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
Construction of continuous functions
Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given
$$f, g : D \to \mathbb{R},$$
then the sum of functions $s = f + g$
(defined by $s(x) = f(x) + g(x)$ for all $x \in D$) is continuous in D.
The same holds for the product of functions $p = f \cdot g$
(defined by $p(x) = f(x) \cdot g(x)$ for all $x \in D$),
which is continuous in D.
Combining the above preservations of continuity and the continuity of constant functions and of the identity function $I(x) = x$ on $\mathbb{R}$, one arrives at the continuity of all polynomial functions.
In the same way, it can be shown that the reciprocal of a continuous function $r = 1/f$
(defined by $r(x) = 1/f(x)$ for all $x \in D$ such that $f(x) \neq 0$)
is continuous in $D \setminus \{ x : f(x) = 0 \}$.
This implies that, excluding the roots of g, the quotient of continuous functions $q = f/g$
(defined by $q(x) = f(x)/g(x)$ for all $x \in D$ such that $g(x) \neq 0$)
is also continuous on $D \setminus \{ x : g(x) = 0 \}$.
For example, the function
$$f(x) = \frac{1}{x}, \quad x \neq 0,$$
is defined for all non-zero real numbers and is continuous at every such point. Thus, it is a continuous function. The question of continuity at $x = 0$ does not arise, since $x = 0$ is not in the domain of f. There is no continuous function $F : \mathbb{R} \to \mathbb{R}$ that agrees with $f(x)$ for all $x \neq 0$.
Since the function sine is continuous on all reals, the sinc function $G(x) = \frac{\sin x}{x}$ is defined and continuous for all real $x \neq 0$. However, unlike the previous example, G can be extended to a continuous function on all real numbers, by defining the value $G(0)$ to be 1, which is the limit of $G(x)$ when x approaches 0, i.e.,
$$G(0) = \lim_{x \to 0} \frac{\sin x}{x} = 1.$$
Thus, by setting
$$G(x) = \begin{cases} \frac{\sin x}{x} & \text{if } x \neq 0, \\ 1 & \text{if } x = 0, \end{cases}$$
the sinc function becomes a continuous function on all real numbers. The term removable singularity is used in such cases, when (re)defining values of a function to coincide with the appropriate limits makes a function continuous at specific points.
A more involved construction of continuous functions is the function composition. Given two continuous functions
$$g : D_g \subseteq \mathbb{R} \to R_g \subseteq D_f \quad \text{and} \quad f : D_f \subseteq \mathbb{R} \to \mathbb{R},$$
their composition, denoted as
$$c = f \circ g : D_g \to \mathbb{R},$$
and defined by $c(x) = f(g(x))$, is continuous.
This construction allows stating, for example, that
$$e^{\sin(\ln x)}$$
is continuous for all $x > 0$.
Examples of discontinuous functions
An example of a discontinuous function is the Heaviside step function H, defined by
$$H(x) = \begin{cases} 1 & \text{if } x \ge 0, \\ 0 & \text{if } x < 0. \end{cases}$$
Pick for instance $\varepsilon = 1/2$. Then there is no $\delta$-neighborhood around $x = 0$, i.e. no open interval $(-\delta, \delta)$ with $\delta > 0$, that will force all the $H(x)$ values to be within the $\varepsilon$-neighborhood of $H(0)$, i.e. within $(1/2, 3/2)$. Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
Similarly, the signum or sign function
$$\operatorname{sgn}(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -1 & \text{if } x < 0, \end{cases}$$
is discontinuous at $x = 0$ but continuous everywhere else. Yet another example: the function
$$f(x) = \begin{cases} \sin\left(x^{-2}\right) & \text{if } x \neq 0, \\ 0 & \text{if } x = 0, \end{cases}$$
is continuous everywhere apart from $x = 0$.
Besides plausible continuities and discontinuities like above, there are also functions with a behavior, often coined pathological, for example, Thomae's function,
$$f(x) = \begin{cases} \frac{1}{q} & \text{if } x = \frac{p}{q} \text{ is rational, in lowest terms with } q > 0, \\ 0 & \text{if } x \text{ is irrational,} \end{cases}$$
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
$$D(x) = \begin{cases} 1 & \text{if } x \text{ is rational,} \\ 0 & \text{if } x \text{ is irrational,} \end{cases}$$
is nowhere continuous.
Properties
A useful lemma
Let $f(x)$ be a function that is continuous at a point $x_0$, and $y_0$ be a value such that $f(x_0) \neq y_0$. Then $f(x) \neq y_0$ throughout some neighbourhood of $x_0$.
Proof: By the definition of continuity, take $\varepsilon = \frac{|y_0 - f(x_0)|}{2} > 0$; then there exists $\delta > 0$ such that
$$\left|f(x) - f(x_0)\right| < \frac{\left|y_0 - f(x_0)\right|}{2} \quad \text{whenever} \quad |x - x_0| < \delta.$$
Suppose there is a point in this neighbourhood for which $f(x) = y_0$; then we have the contradiction
$$\left|f(x_0) - y_0\right| < \frac{\left|f(x_0) - y_0\right|}{2}.$$
Intermediate value theorem
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval $[a, b]$, and k is some number between $f(a)$ and $f(b)$, then there is some number $c \in [a, b]$ such that $f(c) = k$.
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on $[a, b]$ and $f(a)$ and $f(b)$ differ in sign, then, at some point $c \in [a, b]$, $f(c)$ must equal zero.
Extreme value theorem
The extreme value theorem states that if a function f is defined on a closed interval $[a, b]$ (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists $c \in [a, b]$ with $f(c) \ge f(x)$ for all $x \in [a, b]$. The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval $(a, b)$ (or any set that is not both closed and bounded), as, for example, the continuous function $f(x) = \frac{1}{x}$, defined on the open interval (0,1), does not attain a maximum, being unbounded above.
Relation to differentiability and integrability
Every differentiable function
$$f : (a, b) \to \mathbb{R}$$
is continuous, as can be shown. The converse does not hold: for example, the absolute value function
$$f(x) = |x| = \begin{cases} x & \text{if } x \ge 0, \\ -x & \text{if } x < 0, \end{cases}$$
is everywhere continuous. However, it is not differentiable at $x = 0$ (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted $C^1\left((a, b)\right)$. More generally, the set of functions
$$f : \Omega \to \mathbb{R}$$
(from an open interval (or open subset of $\mathbb{R}$) $\Omega$ to the reals) such that f is n times differentiable and such that the n-th derivative of f is continuous is denoted $C^n(\Omega)$. See differentiability class. In the field of computer graphics, properties related (but not identical) to $C^0, C^1, C^2$ are sometimes called $G^0$ (continuity of position), $G^1$ (continuity of tangency), and $G^2$ (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function
$$f : [a, b] \to \mathbb{R}$$
is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
Pointwise and uniform limits
Given a sequence
$$f_1, f_2, \ldots : I \to \mathbb{R}$$
of functions such that the limit
$$f(x) := \lim_{n \to \infty} f_n(x)$$
exists for all $x \in I$, the resulting function $f(x)$ is referred to as the pointwise limit of the sequence of functions $\left(f_n\right)_{n \in \mathbb{N}}$. The pointwise limit function need not be continuous, even if all functions $f_n$ are continuous; for example, $f_n(x) = x^n$ on $[0, 1]$ converges pointwise to a function that is discontinuous at $x = 1$. However, f is continuous if all functions $f_n$ are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
Directional continuity
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number $\varepsilon > 0$, however small, there exists some number $\delta > 0$ such that for all x in the domain with $c < x < c + \delta$, the value of $f(x)$ will satisfy
$$|f(x) - f(c)| < \varepsilon.$$
This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with $c - \delta < x < c$ yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
Semicontinuity
A function f is upper semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any $\varepsilon > 0$, there exists some number $\delta > 0$ such that for all x in the domain with $|x - c| < \delta$, the value of $f(x)$ satisfies
$$f(x) \le f(c) + \varepsilon.$$
The reverse condition is lower semi-continuity.
Continuous functions between metric spaces
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X equipped with a function (called metric) $d_X$, that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function
$$d_X : X \times X \to \mathbb{R}$$
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces $(X, d_X)$ and $(Y, d_Y)$ and a function
$$f : X \to Y,$$
then f is continuous at the point $c \in X$ (with respect to the given metrics) if for any positive real number $\varepsilon > 0$, there exists a positive real number $\delta > 0$ such that all $x \in X$ satisfying $d_X(x, c) < \delta$ will also satisfy $d_Y(f(x), f(c)) < \varepsilon$. As in the case of real functions above, this is equivalent to the condition that for every sequence $(x_n)$ in X with limit $\lim x_n = c$, we have $\lim f(x_n) = f(c)$. The latter condition can be weakened as follows: f is continuous at the point c if and only if for every convergent sequence $(x_n)$ in X with limit c, the sequence $\left(f(x_n)\right)$ is a Cauchy sequence, and c is in the domain of f.
The set of points at which a function between metric spaces is continuous is a $G_\delta$ set – this follows from the $\varepsilon$-$\delta$ definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator
$$T : V \to W$$
between normed vector spaces V and W (which are vector spaces equipped with a compatible norm, denoted $\|x\|$) is continuous if and only if it is bounded, that is, there is a constant K such that
$$\|T(x)\| \le K \|x\|$$
for all $x \in V$.
Uniform, Hölder and Lipschitz continuity
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way $\delta$ depends on $\varepsilon$ and c in the definition above. Intuitively, a function f as above is uniformly continuous if the $\delta$ does not depend on the point c. More precisely, it is required that for every real number $\varepsilon > 0$ there exists $\delta > 0$ such that for every $c, b \in X$ with $d_X(b, c) < \delta$, we have that $d_Y(f(b), f(c)) < \varepsilon$. Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all $b, c \in X$, the inequality
$$d_Y(f(b), f(c)) \le K \cdot \left(d_X(b, c)\right)^\alpha$$
holds. Any Hölder continuous function is uniformly continuous. The particular case $\alpha = 1$ is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality
$$d_Y(f(b), f(c)) \le K \cdot d_X(b, c)$$
holds for any $b, c \in X$. The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
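For example (a standard illustration, not drawn from the surrounding text), the square root function on the unit interval is Hölder continuous with exponent $\alpha = \tfrac{1}{2}$ but not Lipschitz continuous: for $0 \le b \le c \le 1$,
$$\left|\sqrt{c} - \sqrt{b}\right| \le \sqrt{c - b},$$
so $K = 1$ works with $\alpha = \tfrac{1}{2}$, while the difference quotient $\frac{\sqrt{c} - \sqrt{0}}{c - 0} = \frac{1}{\sqrt{c}}$ is unbounded as $c \to 0$, so no Lipschitz constant K can exist.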
Continuous functions between topological spaces
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function
$$f : X \to Y$$
between two topological spaces X and Y is continuous if for every open set $V \subseteq Y$, the inverse image
$$f^{-1}(V) = \{ x \in X \mid f(x) \in V \}$$
is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology $T_X$), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions
$$f : X \to T$$
to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
Continuity at a point
The translation in the language of neighborhoods of the $(\varepsilon, \delta)$-definition of continuity leads to the following definition of the continuity at a point: a function $f : X \to Y$ is continuous at a point $x \in X$ if and only if for any neighborhood V of $f(x)$ in Y, there is a neighborhood U of x in X such that $f(U) \subseteq V$.
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and $f^{-1}(V)$ is the largest subset U of X such that $f(U) \subseteq V$, this definition may be simplified into: f is continuous at x if and only if $f^{-1}(V)$ is a neighborhood of x for every neighborhood V of $f(x)$ in Y.
As an open set is a set that is a neighborhood of all its points, a function is continuous at every point of if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given a map $f : X \to Y$, f is continuous at x if and only if whenever $\mathcal{B}$ is a filter on X that converges to x in X, which is expressed by writing $\mathcal{B} \to x$, then necessarily $f(\mathcal{B}) \to f(x)$ in Y.
If $\mathcal{N}(x)$ denotes the neighborhood filter at x, then f is continuous at x if and only if $f(\mathcal{N}(x)) \to f(x)$ in Y. Moreover, this happens if and only if the prefilter $f(\mathcal{N}(x))$ is a filter base for the neighborhood filter of $f(x)$ in Y.
Alternative definitions
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
Sequences and nets
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function $f : X \to Y$ is sequentially continuous if whenever a sequence $(x_n)$ in $X$ converges to a limit $x$, the sequence $(f(x_n))$ converges to $f(x)$. Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If $X$ is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if $X$ is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable:
Proof. Assume that $f : A \subseteq \mathbb{R} \to \mathbb{R}$ is continuous at $x_0$ (in the sense of $(\varepsilon, \delta)$-continuity). Let $(x_n)_{n \ge 1}$ be a sequence converging to $x_0$ (such a sequence always exists, for example, $x_n = x_0$ for all $n$); since $f$ is continuous at $x_0$, for every $\varepsilon > 0$ there is a $\delta > 0$ such that $|f(x) - f(x_0)| < \varepsilon$ whenever $|x - x_0| < \delta$.
For any such $\delta$ we can find a natural number $\nu$ such that $|x_n - x_0| < \delta$ for all $n > \nu$,
since $(x_n)$ converges to $x_0$; combining this with the continuity estimate we obtain $|f(x_n) - f(x_0)| < \varepsilon$ for all $n > \nu$, so that $(f(x_n))$ converges to $f(x_0)$.
Assume on the contrary that $f$ is sequentially continuous and proceed by contradiction: suppose $f$ is not continuous at $x_0$;
then we can take $\varepsilon > 0$ and, for every $n$, a point $x_n$ with $|x_n - x_0| < \tfrac{1}{n}$ and $|f(x_n) - f(x_0)| \ge \varepsilon$: in this way we have defined a sequence $(x_n)$ such that $x_n \to x_0$
by construction but $f(x_n) \not\to f(x_0)$, which contradicts the hypothesis of sequential continuity.
Closure operator and interior operator definitions
In terms of the interior operator, a function $f : X \to Y$ between topological spaces is continuous if and only if for every subset $B \subseteq Y$,
$f^{-1}\left(\operatorname{int}_Y B\right) \subseteq \operatorname{int}_X\left(f^{-1}(B)\right).$
In terms of the closure operator, $f$ is continuous if and only if for every subset $A \subseteq X$,
$f\left(\operatorname{cl}_X A\right) \subseteq \operatorname{cl}_Y\left(f(A)\right).$
That is to say, given any element $x \in X$ that belongs to the closure of a subset $A \subseteq X$, $f(x)$ necessarily belongs to the closure of $f(A)$ in $Y$. If we declare that a point $x$ is close to a subset $A \subseteq X$ if $x \in \operatorname{cl}_X A$, then this terminology allows for a plain English description of continuity: $f$ is continuous if and only if, for every subset $A \subseteq X$, $f$ maps points that are close to $A$ to points that are close to $f(A)$. Similarly, $f$ is continuous at a fixed given point $x \in X$ if and only if whenever $x$ is close to a subset $A$, then $f(x)$ is close to $f(A)$.
Instead of specifying topological spaces by their open subsets, any topology on can alternatively be determined by a closure operator or by an interior operator.
Specifically, the map that sends a subset $A$ of a topological space $X$ to its topological closure $\operatorname{cl}_X A$ satisfies the Kuratowski closure axioms. Conversely, for any closure operator $A \mapsto \operatorname{cl} A$ there exists a unique topology $\tau$ on $X$ (specifically, $\tau := \{X \setminus \operatorname{cl} A : A \subseteq X\}$) such that for every subset $A \subseteq X$, $\operatorname{cl} A$ is equal to the topological closure $\operatorname{cl}_{(X, \tau)} A$ of $A$ in $(X, \tau)$. If the sets $X$ and $Y$ are each associated with closure operators (both denoted by $\operatorname{cl}$) then a map $f : X \to Y$ is continuous if and only if $f(\operatorname{cl} A) \subseteq \operatorname{cl}(f(A))$ for every subset $A \subseteq X$.
Similarly, the map that sends a subset $A$ of $X$ to its topological interior $\operatorname{int}_X A$ defines an interior operator. Conversely, any interior operator $A \mapsto \operatorname{int} A$ induces a unique topology $\tau$ on $X$ (specifically, $\tau := \{\operatorname{int} A : A \subseteq X\}$) such that for every $A \subseteq X$, $\operatorname{int} A$ is equal to the topological interior $\operatorname{int}_{(X, \tau)} A$ of $A$ in $(X, \tau)$. If the sets $X$ and $Y$ are each associated with interior operators (both denoted by $\operatorname{int}$) then a map $f : X \to Y$ is continuous if and only if $f^{-1}(\operatorname{int} B) \subseteq \operatorname{int}\left(f^{-1}(B)\right)$ for every subset $B \subseteq Y$.
Filters and prefilters
Continuity can also be characterized in terms of filters. A function $f : X \to Y$ is continuous if and only if whenever a filter $\mathcal{B}$ on $X$ converges in $X$ to a point $x \in X$, then the prefilter $f(\mathcal{B})$ converges in $Y$ to $f(x)$. This characterization remains true if the word "filter" is replaced by "prefilter."
Properties
If $f : X \to Y$ and $g : Y \to Z$ are continuous, then so is the composition $g \circ f : X \to Z$. If $f : X \to Y$ is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set $X$ are partially ordered: a topology $\tau_1$ is said to be coarser than another topology $\tau_2$ (notation: $\tau_1 \subseteq \tau_2$) if every open subset with respect to $\tau_1$ is also open with respect to $\tau_2$. Then, the identity map
$\operatorname{id}_X : (X, \tau_2) \to (X, \tau_1)$
is continuous if and only if $\tau_1 \subseteq \tau_2$ (see also comparison of topologies). More generally, a continuous function
$(X, \tau_X) \to (Y, \tau_Y)$
stays continuous if the topology $\tau_Y$ is replaced by a coarser topology and/or $\tau_X$ is replaced by a finer topology.
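A small sketch of the comparison criterion (the finite topologies are assumptions for illustration): with topologies encoded as sets of open sets, coarseness is just set inclusion, which is exactly the continuity condition for the identity map stated above.

def is_coarser(tau1, tau2):
    """tau1 is coarser than tau2 iff every tau1-open set is tau2-open."""
    return tau1 <= tau2

tau_indiscrete = {frozenset(), frozenset({1, 2})}
tau_discrete = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}

# id : (X, tau_discrete) -> (X, tau_indiscrete) is continuous because the
# codomain carries the coarser topology:
print(is_coarser(tau_indiscrete, tau_discrete))  # True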
Homeomorphisms
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function $f^{-1}$ need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
Defining topologies via continuous functions
Given a function
$f : X \to S,$
where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which $f^{-1}(A)$ is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that $A = f^{-1}(U)$ for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
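The final topology can be computed exhaustively on a small finite example. The following Python sketch (the map and topology are illustrative assumptions, not from the article) collects every subset of S whose preimage is open, and shows that a surjection collapsing two points yields the expected quotient-like topology.

from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def final_topology(f, opens_X, S):
    """Open sets of S are exactly those A with f^{-1}(A) open in X."""
    preimage = lambda A: frozenset(x for x in f if f[x] in A)
    return {A for A in powerset(S) if preimage(A) in opens_X}

# X = {1, 2, 3}; the open sets separate {1, 2} from {3}; f glues 1 and 2.
opens_X = {frozenset(), frozenset({1, 2}), frozenset({3}), frozenset({1, 2, 3})}
f = {1: 'a', 2: 'a', 3: 'b'}
print(final_topology(f, opens_X, {'a', 'b'}))
# all four subsets of {'a', 'b'}: the final topology here is discrete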
A topology on a set S is uniquely determined by the class of all continuous functions $S \to X$ into all topological spaces X. Dually, a similar idea can be applied to maps $X \to S$.
Related notions
If $f : S \to Y$ is a continuous function from some subset $S$ of a topological space $X$, then a continuous extension of $f$ to $X$ is any continuous function $F : X \to Y$ such that $F(s) = f(s)$ for every $s \in S$, which is a condition that is often written as $f = F\vert_S$. In words, it is any continuous function $F : X \to Y$ that restricts to $f$ on $S$. This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If $f : S \to Y$ is not continuous, then it could not possibly have a continuous extension. If $Y$ is a Hausdorff space and $S$ is a dense subset of $X$, then a continuous extension of $f$ to $X$, if one exists, will be unique. The Blumberg theorem states that if $f : \mathbb{R} \to \mathbb{R}$ is an arbitrary function then there exists a dense subset $D$ of $\mathbb{R}$ such that the restriction $f\vert_D : D \to \mathbb{R}$ is continuous; in other words, every function $\mathbb{R} \to \mathbb{R}$ can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function $f : X \to Y$ between particular types of partially ordered sets $X$ and $Y$ is continuous if for each directed subset $A$ of $X$, we have $f(\sup A) = \sup f(A)$. Here $\sup$ is the supremum with respect to the orderings in $X$ and $Y$, respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor
$F : \mathcal{C} \to \mathcal{D}$
between two categories is called continuous if it commutes with small limits. That is to say,
$\varprojlim_{i \in I} F(C_i) \cong F\left(\varprojlim_{i \in I} C_i\right)$
for any small (that is, indexed by a set $I$, as opposed to a class) diagram of objects $(C_i)_{i \in I}$ in $\mathcal{C}$.
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
In measure theory, a function $f : E \to \mathbb{R}$ defined on a Lebesgue measurable set $E \subseteq \mathbb{R}$ is called approximately continuous at a point $x_0$ if the approximate limit of $f$ at $x_0$ exists and equals $f(x_0)$. This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
See also
Continuity (mathematics)
Absolute continuity
Approximate continuity
Dini continuity
Equicontinuity
Geometric continuity
Parametric continuity
Classification of discontinuities
Coarse function
Continuous function (set theory)
Continuous stochastic process
Normal function
Open and closed maps
Piecewise
Symmetrically continuous function
Direction-preserving function - an analog of a continuous function in discrete spaces.
References
Bibliography
Calculus
Types of functions | Continuous function | [
"Mathematics"
] | 6,612 | [
"Functions and mappings",
"Calculus",
"Theory of continuous functions",
"Mathematical objects",
"Topology",
"Mathematical relations",
"Types of functions"
] |
6,123 | https://en.wikipedia.org/wiki/Curl%20%28mathematics%29 | In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field.
A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.
The notation $\operatorname{curl} \mathbf{F}$ is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation $\operatorname{rot} \mathbf{F}$ is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in $\nabla \times \mathbf{F}$, which also reveals the relation between curl (rotor), divergence, and gradient operators.
Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.
Definition
The curl of a vector field $\mathbf{F}$, denoted by $\operatorname{curl} \mathbf{F}$, or $\nabla \times \mathbf{F}$, or $\operatorname{rot} \mathbf{F}$, is an operator that maps $C^k$ functions in $\mathbb{R}^3$ to $C^{k-1}$ functions in $\mathbb{R}^3$, and in particular, it maps continuously differentiable functions $\mathbb{R}^3 \to \mathbb{R}^3$ to continuous functions $\mathbb{R}^3 \to \mathbb{R}^3$. It can be defined in several ways, to be mentioned below:
One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if $\hat{\mathbf{n}}$ is any unit vector, the component of the curl of $\mathbf{F}$ along the direction $\hat{\mathbf{n}}$ may be defined to be the limiting value of a closed line integral in a plane perpendicular to $\hat{\mathbf{n}}$ divided by the area enclosed, as the path of integration is contracted indefinitely around the point.
More specifically, the curl is defined at a point $p$ as
$(\nabla \times \mathbf{F})(p) \cdot \hat{\mathbf{n}} \;=\; \lim_{A \to 0} \frac{1}{|A|} \oint_{C} \mathbf{F} \cdot d\mathbf{r},$
where the line integral is calculated along the boundary $C$ of the area $A$ in question, $|A|$ being the magnitude of the area. This equation defines the component of the curl of $\mathbf{F}$ along the direction $\hat{\mathbf{n}}$. The infinitesimal surfaces bounded by $C$ have $\hat{\mathbf{n}}$ as their normal. $C$ is oriented via the right-hand rule.
The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately.
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface.
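The limit definition lends itself to a direct numerical check. The sketch below (an illustration, not from the article) approximates the z-component of the curl of the field F = (y, −x, 0) at the origin by computing the circulation around a small square in the xy-plane and dividing by its area; the ratio stays near −2, the exact value found in Example 2 below.

def F(x, y):
    # in-plane components of F = (y, -x, 0); only these enter the z-component
    return (y, -x)

def circulation_over_area(h, n=1000):
    """Circulation of F around the square [-h, h]^2, divided by its area."""
    total, step = 0.0, 2 * h / n
    for i in range(n):
        t = -h + (i + 0.5) * step            # midpoint rule along each edge
        total += F(t, -h)[0] * step          # bottom edge, traversed in +x
        total += F(h, t)[1] * step           # right edge, traversed in +y
        total -= F(t, h)[0] * step           # top edge, traversed in -x
        total -= F(-h, t)[1] * step          # left edge, traversed in -y
    return total / (2 * h) ** 2

print(circulation_over_area(0.1))    # -2.0 (to rounding)
print(circulation_over_area(0.001))  # -2.0: stable as the loop shrinks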
Another way one can define the curl vector of a function $\mathbf{F}$ at a point $p$ is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing $p$ divided by the volume enclosed, as the shell is contracted indefinitely around $p$.
More specifically, the curl may be defined by the vector formula
$(\nabla \times \mathbf{F})(p) \;=\; \lim_{V \to 0} \frac{1}{|V|} \oint_{S} \hat{\mathbf{n}} \times \mathbf{F} \, dS,$
where the surface integral is calculated along the boundary $S$ of the volume $V$, $|V|$ being the magnitude of the volume, and $\hat{\mathbf{n}}$ pointing outward from the surface perpendicularly at every point in $S$.
In this formula, the cross product in the integrand measures the tangential component of $\mathbf{F}$ at each point on the surface $S$, and points along the surface at right angles to the tangential projection of $\mathbf{F}$. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of $\mathbf{F}$ around $S$, and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point.
To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume.
Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates:
$(\operatorname{curl} \mathbf{F})_1 = \frac{1}{h_2 h_3}\left(\frac{\partial (h_3 F_3)}{\partial u_2} - \frac{\partial (h_2 F_2)}{\partial u_3}\right).$
The equation for each component $(\operatorname{curl} \mathbf{F})_k$ can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices).
If $(x_1, x_2, x_3)$ are the Cartesian coordinates and $(u_1, u_2, u_3)$ are the orthogonal coordinates, then
$h_i = \sqrt{\left(\frac{\partial x_1}{\partial u_i}\right)^2 + \left(\frac{\partial x_2}{\partial u_i}\right)^2 + \left(\frac{\partial x_3}{\partial u_i}\right)^2}$
is the length of the coordinate vector corresponding to $u_i$. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.
Usage
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived.
The notation has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra.
Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), $\nabla \times \mathbf{F}$ is, for $\mathbf{F}$ composed of $(F_x, F_y, F_z)$ (where the subscripts indicate the components of the vector, not partial derivatives):
$\nabla \times \mathbf{F} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ F_x & F_y & F_z \end{vmatrix},$
where $\hat{\imath}$, $\hat{\jmath}$, and $\hat{k}$ are the unit vectors for the $x$-, $y$-, and $z$-axes, respectively. This expands as follows:
$\nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \hat{\imath} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \hat{\jmath} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \hat{k}.$
Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection.
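The Cartesian expansion can be reproduced symbolically; the following SymPy sketch (assuming SymPy's vector module is available; the component names are placeholders) prints the three components as unevaluated partial derivatives, matching the formula above.

from sympy import Function
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
Fx = Function('F_x')(N.x, N.y, N.z)
Fy = Function('F_y')(N.x, N.y, N.z)
Fz = Function('F_z')(N.x, N.y, N.z)

F = Fx * N.i + Fy * N.j + Fz * N.k
print(curl(F))  # (dFz/dy - dFy/dz) i + (dFx/dz - dFz/dx) j + (dFy/dx - dFx/dy) k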
In a general coordinate system, the curl is given by
$(\nabla \times \mathbf{F})^k = \varepsilon^{k\ell m} \nabla_\ell F_m,$
where $\varepsilon$ denotes the Levi-Civita tensor (whose components carry a factor of $1/\sqrt{g}$), $\nabla_\ell$ the covariant derivative, $g$ is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative:
$\nabla \times \mathbf{F} = \varepsilon^{k\ell m} \left(\partial_\ell F_m\right) \mathbf{R}_k,$
where $\mathbf{R}_k$ are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as:
$\nabla \times \mathbf{F} = \left(\star\left(d\mathbf{F}^\flat\right)\right)^\sharp.$
Here $\flat$ and $\sharp$ are the musical isomorphisms, and $\star$ is the Hodge star operator. This formula shows how to calculate the curl of $\mathbf{F}$ in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed.
Examples
Example 1
Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.
The curl of the vector field at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be seen in the examples below.
Example 2
The vector field
$\mathbf{F}(x, y, z) = y \hat{\boldsymbol{x}} - x \hat{\boldsymbol{y}}$
can be decomposed as
$F_x = y, \qquad F_y = -x, \qquad F_z = 0.$
Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed.
Calculating the curl:
$\nabla \times \mathbf{F} = 0 \, \hat{\boldsymbol{x}} + 0 \, \hat{\boldsymbol{y}} + \left(\frac{\partial}{\partial x}(-x) - \frac{\partial}{\partial y}\, y\right) \hat{\boldsymbol{z}} = -2 \, \hat{\boldsymbol{z}}.$
The resulting vector field describing the curl would at all points be pointing in the negative $z$ direction. The results of this equation align with what could have been predicted using the right-hand rule with a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed.
Example 3
For the vector field
$\mathbf{F}(x, y, z) = -x^2 \, \hat{\boldsymbol{y}}$
the curl is not as obvious from the graph. However, taking the object from the previous example and placing it anywhere on the line $x = 3$, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would point straight in the negative $z$ direction. Inversely, if placed on $x = -3$, the object would rotate counterclockwise and the right-hand rule would result in a positive $z$ direction.
Calculating the curl:
$\nabla \times \mathbf{F} = 0 \, \hat{\boldsymbol{x}} + 0 \, \hat{\boldsymbol{y}} + \frac{\partial}{\partial x}\left(-x^2\right) \hat{\boldsymbol{z}} = -2x \, \hat{\boldsymbol{z}}.$
The curl points in the negative $z$ direction when $x$ is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane $x = 0$.
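Both examples can be checked with the same symbolic machinery (a sketch; the field expressions follow the formulas as reconstructed above):

from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')

F2 = N.y * N.i - N.x * N.j   # Example 2
print(curl(F2))              # (-2)*N.k: uniform, pointing in -z everywhere

F3 = -N.x**2 * N.j           # Example 3
print(curl(F3))              # (-2*N.x)*N.k: sign and magnitude depend on x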
Further examples
In a vector field describing the linear velocities of each part of a rotating disk in uniform circular motion, the curl has the same value at all points, and this value turns out to be exactly two times the vectorial angular velocity of the disk (oriented as usual by the right-hand rule). More generally, for any flowing mass, the linear velocity vector field at each point of the mass flow has a curl (the vorticity of the flow at that point) equal to exactly two times the local vectorial angular velocity of the mass about the point.
For any solid object subject to an external physical force (such as gravity or the electromagnetic force), one may consider the vector field representing the infinitesimal force-per-unit-volume contributions acting at each of the points of the object. This force field may create a net torque on the object about its center of mass, and this torque turns out to be directly proportional and vectorially parallel to the (vector-valued) integral of the curl of the force field over the whole volume.
Of the four Maxwell's equations, two—Faraday's law and Ampère's law—can be compactly expressed using curl. Faraday's law states that the curl of an electric field is equal to the opposite of the time rate of change of the magnetic field, while Ampère's law relates the curl of the magnetic field to the current and the time rate of change of the electric field.
Identities
In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields $\mathbf{v}$ and $\mathbf{F}$ can be shown to be
$\nabla \times (\mathbf{v} \times \mathbf{F}) = (\mathbf{F} \cdot \nabla)\mathbf{v} - (\mathbf{v} \cdot \nabla)\mathbf{F} + \mathbf{v}(\nabla \cdot \mathbf{F}) - \mathbf{F}(\nabla \cdot \mathbf{v}).$
Interchanging the vector field and $\nabla$ operator, we arrive at the cross product of a vector field with the curl of a vector field:
$\mathbf{v} \times (\nabla \times \mathbf{F}) = \nabla_{\mathbf{F}}(\mathbf{v} \cdot \mathbf{F}) - (\mathbf{v} \cdot \nabla)\mathbf{F},$
where $\nabla_{\mathbf{F}}$ is the Feynman subscript notation, which considers only the variation due to the vector field $\mathbf{F}$ (i.e., in this case, $\mathbf{v}$ is treated as being constant in space).
Another example is the curl of a curl of a vector field. It can be shown that in general coordinates
$\nabla \times (\nabla \times \mathbf{F}) = \nabla(\nabla \cdot \mathbf{F}) - \nabla^2 \mathbf{F},$
and this identity defines the vector Laplacian of $\mathbf{F}$, symbolized as $\nabla^2 \mathbf{F}$.
The curl of the gradient of any scalar field $\varphi$ is always the zero vector field
$\nabla \times (\nabla \varphi) = \mathbf{0},$
which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives.
The divergence of the curl of any vector field is equal to zero:
$\nabla \cdot (\nabla \times \mathbf{F}) = 0.$
If $\varphi$ is a scalar valued function and $\mathbf{F}$ is a vector field, then
$\nabla \times (\varphi \mathbf{F}) = \nabla \varphi \times \mathbf{F} + \varphi \, \nabla \times \mathbf{F}.$
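The last four identities can be verified symbolically for generic fields. A SymPy sketch (the component-wise vector Laplacian helper is an assumption of this illustration, written out because the curl-of-curl identity defines the vector Laplacian in Cartesian coordinates):

from sympy import Function, diff, simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
phi = Function('phi')(N.x, N.y, N.z)
F = (Function('P')(N.x, N.y, N.z) * N.i
     + Function('Q')(N.x, N.y, N.z) * N.j
     + Function('R')(N.x, N.y, N.z) * N.k)

def vec_laplacian(V):
    # component-wise Laplacian, valid in Cartesian coordinates
    cx, cy, cz = V.to_matrix(N)
    lap = lambda c: sum(diff(c, v, 2) for v in (N.x, N.y, N.z))
    return lap(cx) * N.i + lap(cy) * N.j + lap(cz) * N.k

print(curl(gradient(phi)))            # 0: curl of a gradient vanishes
print(simplify(divergence(curl(F))))  # 0: divergence of a curl vanishes
residual = curl(curl(F)) - gradient(divergence(F)) + vec_laplacian(F)
print(simplify(residual.to_matrix(N)))  # zero column: curl-of-curl identity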
Generalizations
The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and these all being 3-dimensional spaces.
Differential forms
In 3 dimensions, a differential 0-form is a real-valued function $f(x, y, z)$; a differential 1-form is the following expression, where the coefficients are functions:
$a_1 \, dx + a_2 \, dy + a_3 \, dz;$
a differential 2-form is the formal sum, again with function coefficients:
$a_{12} \, dx \wedge dy + a_{13} \, dx \wedge dz + a_{23} \, dy \wedge dz;$
and a differential 3-form is defined by a single term with one function as coefficient:
$a_{123} \, dx \wedge dy \wedge dz.$
(Here the $a$-coefficients are real functions of three variables; the "wedge products", e.g. $dx \wedge dy$, can be interpreted as some kind of oriented area elements, $dx \wedge dy = -\, dy \wedge dx$, etc.)
The exterior derivative of a $k$-form in $\mathbb{R}^3$ is defined as the $(k+1)$-form from above—and in $\mathbb{R}^n$ if, e.g.,
$\omega^{(k)} = \sum_{1 \le i_1 < \cdots < i_k \le n} a_{i_1, \ldots, i_k} \, dx_{i_1} \wedge \cdots \wedge dx_{i_k},$
then the exterior derivative $d$ leads to
$d\omega^{(k)} = \sum_{j=1}^{n} \sum_{1 \le i_1 < \cdots < i_k \le n} \frac{\partial a_{i_1, \ldots, i_k}}{\partial x_j} \, dx_j \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k}.$
The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,
$\frac{\partial^2}{\partial x_i \, \partial x_j} = \frac{\partial^2}{\partial x_j \, \partial x_i},$
and antisymmetry,
$dx_i \wedge dx_j = -\, dx_j \wedge dx_i,$
the twofold application of the exterior derivative yields $0$ (the zero $(k+2)$-form).
Thus, denoting the space of $k$-forms by $\Omega^k(\mathbb{R}^3)$ and the exterior derivative by $d$, one gets a sequence:
$0 \xrightarrow{d} \Omega^0\left(\mathbb{R}^3\right) \xrightarrow{d} \Omega^1\left(\mathbb{R}^3\right) \xrightarrow{d} \Omega^2\left(\mathbb{R}^3\right) \xrightarrow{d} \Omega^3\left(\mathbb{R}^3\right) \xrightarrow{d} 0.$
Here $\Omega^k(\mathbb{R}^n)$ is the space of sections of the exterior algebra $\Lambda^k(\mathbb{R}^n)$ vector bundle over $\mathbb{R}^n$, whose dimension is the binomial coefficient $\binom{n}{k}$; note that $\Omega^k(\mathbb{R}^3) = 0$ for $k > 3$ or $k < 0$. Writing only dimensions, one obtains a row of Pascal's triangle:
$0 \to 1 \to 3 \to 3 \to 1 \to 0;$
the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div.
Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, $k$-forms can be identified with $k$-vector fields ($k$-forms are $k$-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between $k$-vectors and $(n-k)$-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange $k$-forms, $k$-vector fields, $(n-k)$-forms, and $(n-k)$-vector fields; this is known as Hodge duality. Concretely, on $\mathbb{R}^3$ this is given by:
1-forms and 1-vector fields: the 1-form $a \, dx + b \, dy + c \, dz$ corresponds to the vector field $(a, b, c)$.
1-forms and 2-forms: one replaces $dx$ by the dual quantity $dy \wedge dz$ (i.e., omit $dx$), and likewise, taking care of orientation: $dy$ corresponds to $dz \wedge dx = -\, dx \wedge dz$, and $dz$ corresponds to $dx \wedge dy$. Thus the form $a \, dx + b \, dy + c \, dz$ corresponds to the "dual form" $a \, dy \wedge dz + b \, dz \wedge dx + c \, dx \wedge dy$.
Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields:
grad takes a scalar field (0-form) to a vector field (1-form);
curl takes a vector field (1-form) to a pseudovector field (2-form);
div takes a pseudovector field (2-form) to a pseudoscalar field (3-form)
On the other hand, the fact that $d^2 = 0$ corresponds to the identities
$\nabla \times (\nabla f) = \mathbf{0}$
for any scalar field $f$, and
$\nabla \cdot (\nabla \times \mathbf{v}) = 0$
for any vector field $\mathbf{v}$.
Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and $n$-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and $(n-1)$-forms are always fiberwise $n$-dimensional and can be identified with vector fields.
Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are
$0 \to 1 \to 4 \to 6 \to 4 \to 1 \to 0,$
so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to a 6-dimensional vector space, and so one has
$\omega^{(2)} = \sum_{i < k} a_{i,k} \, dx_i \wedge dx_k,$
which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero ($d^2 = 0$). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way.
However, one can define a curl of a vector field as a 2-vector field in general, as described below.
Curl geometrically
2-vectors correspond to the exterior power $\Lambda^2 V$; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra $\mathfrak{so}(V)$ of infinitesimal rotations. This has $\binom{n}{2} = \frac{1}{2}n(n-1)$ dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have $n = \frac{1}{2}n(n-1)$, which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra $\mathfrak{so}(4)$.
The curl of a 3-dimensional vector field which only depends on 2 coordinates (say $x$ and $y$) is simply a vertical vector field (in the $z$ direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page.
Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.
Inverse
In the case where the divergence of a vector field $\mathbf{V}$ is zero, a vector field $\mathbf{W}$ exists such that $\mathbf{V} = \nabla \times \mathbf{W}$. This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential.
If $\mathbf{W}$ is a vector field with $\nabla \times \mathbf{W} = \mathbf{V}$, then adding any gradient vector field $\nabla f$ to $\mathbf{W}$ will result in another vector field $\mathbf{W} + \nabla f$ such that $\nabla \times (\mathbf{W} + \nabla f) = \mathbf{V}$ as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law.
See also
Helmholtz decomposition
Hiptmair–Xu preconditioner
Del in cylindrical and spherical coordinates
Vorticity
References
Further reading
External links
Differential operators
Linear operators in calculus
Vector calculus
Analytic geometry | Curl (mathematics) | [
"Mathematics"
] | 4,049 | [
"Mathematical analysis",
"Differential operators"
] |
6,136 | https://en.wikipedia.org/wiki/Carbon%20monoxide | Carbon monoxide (chemical formula CO) is a poisonous, flammable gas that is colorless, odorless, tasteless, and slightly less dense than air. Carbon monoxide consists of one carbon atom and one oxygen atom connected by a triple bond. It is the simplest carbon oxide. In coordination complexes, the carbon monoxide ligand is called carbonyl. It is a key ingredient in many processes in industrial chemistry.
The most common source of carbon monoxide is the partial combustion of carbon-containing compounds. Numerous environmental and biological sources generate carbon monoxide. In industry, carbon monoxide is important in the production of many compounds, including drugs, fragrances, and fuels. Upon emission into the atmosphere, carbon monoxide affects several processes that contribute to climate change.
Indoors CO is one of the most acutely toxic contaminants affecting indoor air quality. CO may be emitted from tobacco smoke and generated from malfunctioning fuel burning stoves (wood, kerosene, natural gas, propane) and fuel burning heating systems (wood, oil, natural gas) and from blocked flues connected to these appliances. Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries.
Carbon monoxide has important biological roles across phylogenetic kingdoms. It is produced by many organisms, including humans. In mammalian physiology, carbon monoxide is a classical example of hormesis where low concentrations serve as an endogenous neurotransmitter (gasotransmitter) and high concentrations are toxic resulting in carbon monoxide poisoning. It is isoelectronic with both cyanide anion CN− and molecular nitrogen N2.
Physical and chemical properties
Carbon monoxide is the simplest oxocarbon and is isoelectronic with other triply bonded diatomic species possessing 10 valence electrons, including the cyanide anion, the nitrosonium cation, boron monofluoride and molecular nitrogen. It has a molar mass of 28.0, which, according to the ideal gas law, makes it slightly less dense than air, whose average molar mass is 28.8.
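The density comparison follows from the ideal gas law cited above; a quick numerical sketch (atomic weights are standard values; 28.8 g/mol for air is the figure used in this article):

M_CO = 12.011 + 15.999        # g/mol for C + O, about 28.0
M_air = 28.8                  # g/mol, average for air as cited above

R, T, P = 0.082057, 273.15, 1.0      # L·atm/(mol·K), 0 °C, 1 atm
density = lambda M: M * P / (R * T)  # ideal gas law: rho = MP/RT, in g/L

print(f"CO : {density(M_CO):.3f} g/L")   # about 1.25 g/L
print(f"air: {density(M_air):.3f} g/L")  # about 1.29 g/L -> CO is slightly lighter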
The carbon and oxygen are connected by a triple bond that consists of a net two pi bonds and one sigma bond. The bond length between the carbon atom and the oxygen atom is 112.8 pm. This bond length is consistent with a triple bond, as in molecular nitrogen (N2), which has a similar bond length (109.76 pm) and nearly the same molecular mass. Carbon–oxygen double bonds are significantly longer, 120.8 pm in formaldehyde, for example. The boiling point (82 K) and melting point (68 K) are very similar to those of N2 (77 K and 63 K, respectively). The bond-dissociation energy of 1072 kJ/mol is stronger than that of N2 (942 kJ/mol) and represents the strongest chemical bond known.
The ground electronic state of carbon monoxide is a singlet state since there are no unpaired electrons.
Bonding and dipole moment
The strength of the C-O bond in carbon monoxide is indicated by the high frequency of its vibration, 2143 cm−1. For comparison, organic carbonyls such as ketones and esters absorb at around 1700 cm−1.
Carbon and oxygen together have a total of 10 electrons in the valence shell. Following the octet rule for both carbon and oxygen, the two atoms form a triple bond, with six shared electrons in three bonding molecular orbitals, rather than the usual double bond found in organic carbonyl compounds. Since four of the shared electrons come from the oxygen atom and only two from carbon, one bonding orbital is occupied by two electrons from oxygen, forming a dative or dipolar bond. This causes a C←O polarization of the molecule, with a small negative charge on carbon and a small positive charge on oxygen. The other two bonding orbitals are each occupied by one electron from carbon and one from oxygen, forming (polar) covalent bonds with a reverse C→O polarization since oxygen is more electronegative than carbon. In the free carbon monoxide molecule, a net negative charge δ– remains at the carbon end and the molecule has a small dipole moment of 0.122 D.
The molecule is therefore asymmetric: oxygen is more electron dense than carbon and is also slightly positively charged compared to carbon being negative.
Carbon monoxide has a computed fractional bond order of 2.6, indicating that the "third" bond is important but constitutes somewhat less than a full bond. Thus, in valence bond terms, –C≡O+ is the most important structure, while :C=O is non-octet, but has a neutral formal charge on each atom and represents the second most important resonance contributor. Because of the lone pair and divalence of carbon in this resonance structure, carbon monoxide is often considered to be an extraordinarily stabilized carbene. Isocyanides are compounds in which the O is replaced by an NR (R = alkyl or aryl) group and have a similar bonding scheme.
If carbon monoxide acts as a ligand, the polarity of the dipole may reverse with a net negative charge on the oxygen end, depending on the structure of the coordination complex.
See also the section "Coordination chemistry" below.
Bond polarity and oxidation state
Theoretical and experimental studies show that, despite the greater electronegativity of oxygen, the dipole moment points from the more-negative carbon end to the more-positive oxygen end. The three bonds are in fact polar covalent bonds that are strongly polarized. The calculated polarization toward the oxygen atom is 71% for the σ-bond and 77% for both π-bonds.
The oxidation state of carbon in carbon monoxide is +2 in each of these structures. It is calculated by counting all the bonding electrons as belonging to the more electronegative oxygen. Only the two non-bonding electrons on carbon are assigned to carbon. In this count, carbon then has only two valence electrons in the molecule compared to four in the free atom.
Occurrence
Carbon monoxide occurs in various natural and artificial environments. Photochemical degradation of plant matter for example generates an estimated 60 million tons/year. Typical concentrations in parts per million are as follows:
Atmospheric presence
Carbon monoxide (CO) is present in small amounts (about 80 ppb) in the Earth's atmosphere. About half of this derives from the combustion of fossil fuels and biomass; most of the rest comes from chemical reactions with organic compounds emitted by human activities and natural origins due to photochemical reactions in the troposphere that generate about 5 × 10¹² kilograms per year. Other natural sources of CO include volcanoes, forest and bushfires, and other miscellaneous forms of combustion such as fossil fuels. Small amounts are also emitted from the ocean, and from geological activity because carbon monoxide occurs dissolved in molten volcanic rock at high pressures in the Earth's mantle. Because natural sources of carbon monoxide vary from year to year, it is difficult to accurately measure natural emissions of the gas.
Carbon monoxide has an indirect effect on radiative forcing by elevating concentrations of direct greenhouse gases, including methane and tropospheric ozone. CO can react chemically with other atmospheric constituents (primarily the hydroxyl radical, •OH) that would otherwise destroy methane. Through natural processes in the atmosphere, it is oxidized to carbon dioxide and ozone. Carbon monoxide is short-lived in the atmosphere (with an average lifetime of about one to two months), and spatially variable in concentration.
Due to its long lifetime in the mid-troposphere, carbon monoxide is also used as a tracer for pollutant plumes.
Astronomy
Beyond Earth, carbon monoxide is the second-most common diatomic molecule in the interstellar medium, after molecular hydrogen. Because of its asymmetry, this polar molecule produces far brighter spectral lines than the hydrogen molecule, making CO much easier to detect. Interstellar CO was first detected with radio telescopes in 1970. It is now the most commonly used tracer of molecular gas in general in the interstellar medium of galaxies, as molecular hydrogen can only be detected using ultraviolet light, which requires space telescopes. Carbon monoxide observations provide much of the information about the molecular clouds in which most stars form.
Beta Pictoris, the second brightest star in the constellation Pictor, shows an excess of infrared emission compared to normal stars of its type, which is caused by large quantities of dust and gas (including carbon monoxide) near the star.
In the atmosphere of Venus carbon monoxide occurs as a result of the photodissociation of carbon dioxide by electromagnetic radiation of wavelengths shorter than 169 nm. It has also been identified spectroscopically on the surface of Neptune's moon Triton.
Solid carbon monoxide is a component of comets. The volatile or "ice" component of Halley's Comet is about 15% CO. At room temperature and at atmospheric pressure, carbon monoxide is actually only metastable (see Boudouard reaction) and the same is true at low temperatures where CO and CO2 are solid, but nevertheless it can exist for billions of years in comets. There is very little CO in the atmosphere of Pluto, which seems to have been formed from comets. This may be because there is (or was) liquid water inside Pluto.
Carbon monoxide can react with water to form carbon dioxide and hydrogen:
CO + H2O → CO2 + H2
This is called the water-gas shift reaction when occurring in the gas phase, but it can also take place (very slowly) in an aqueous solution.
If the hydrogen partial pressure is high enough (for instance in an underground sea), formic acid will be formed:
CO + H2O → HCOOH
These reactions can take place in a few million years even at temperatures such as found on Pluto.
Pollution and health effects
Urban pollution
Carbon monoxide is a temporary atmospheric pollutant in some urban areas, chiefly from the exhaust of internal combustion engines (including vehicles, portable and back-up generators, lawnmowers, power washers, etc.), but also from incomplete combustion of various other fuels (including wood, coal, charcoal, oil, paraffin, propane, natural gas, and trash).
Large CO pollution events can be observed from space over cities.
Role in ground level ozone formation
Carbon monoxide is, along with aldehydes, part of the series of cycles of chemical reactions that form photochemical smog. It reacts with hydroxyl radical (•OH) to produce a radical intermediate •HOCO, which rapidly transfers its radical hydrogen to O2 to form peroxy radical (HO2•) and carbon dioxide (). Peroxy radical subsequently reacts with nitrogen oxide (NO) to form nitrogen dioxide (NO2) and hydroxyl radical. NO2 gives O(3P) via photolysis, thereby forming O3 following reaction with O2.
Since hydroxyl radical is formed during the formation of NO2, the balance of the sequence of chemical reactions starting with carbon monoxide and leading to the formation of ozone is:
CO + 2O2 + hν → CO2 + O3
(where hν refers to the photon of light absorbed by the NO2 molecule in the sequence)
Although the creation of NO2 is the critical step leading to low level ozone formation, it also increases this ozone in another, somewhat mutually exclusive way, by reducing the quantity of NO that is available to react with ozone.
Indoor air pollution
Carbon monoxide is one of the most acutely toxic indoor air contaminants. Carbon monoxide may be emitted from tobacco smoke and generated from malfunctioning fuel burning stoves (wood, kerosene, natural gas, propane) and fuel burning heating systems (wood, oil, natural gas) and from blocked flues connected to these appliances. In developed countries the main sources of indoor CO emission come from cooking and heating devices that burn fossil fuels and are faulty, incorrectly installed or poorly maintained. Appliance malfunction may be due to faulty installation or lack of maintenance and proper use. In low- and middle-income countries the most common sources of CO in homes are burning biomass fuels and cigarette smoke.
Mining
Miners refer to carbon monoxide as "whitedamp" or the "silent killer". It can be found in confined areas of poor ventilation in both surface mines and underground mines. The most common sources of carbon monoxide in mining operations are the internal combustion engine and explosives; however, in coal mines, carbon monoxide can also be found due to the low-temperature oxidation of coal. The idiom "Canary in the coal mine" pertained to an early warning of a carbon monoxide presence.
Health effects
Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. Acute exposure can also lead to long-term neurological effects such as cognitive and behavioural changes. Severe CO poisoning may lead to unconsciousness, coma and death. Chronic exposure to low concentrations of carbon monoxide may lead to lethargy, headaches, nausea, flu-like symptoms and neuropsychological and cardiovascular issues.
Chemistry
Carbon monoxide has a wide range of functions across all disciplines of chemistry. The four premier categories of reactivity involve metal-carbonyl catalysis, radical chemistry, cation and anion chemistries.
Coordination chemistry
Most metals form coordination complexes containing covalently attached carbon monoxide. These derivatives, which are called metal carbonyls, tend to be more robust when the metal is in lower oxidation states. For example iron pentacarbonyl (Fe(CO)5) is an air-stable, distillable liquid. Nickel carbonyl is an example of a metal carbonyl complex that forms by the direct combination of carbon monoxide with the metal:
Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C)
These volatile complexes are often highly toxic. Some metal–CO complexes are prepared by decarbonylation of organic solvents, not from CO. For instance, iridium trichloride and triphenylphosphine react in boiling 2-methoxyethanol or DMF to afford IrCl(CO)(PPh3)2.
As a ligand, CO binds through carbon, forming a kind of triple bond. The lone pair on the carbon atom donates electron density to form a M-CO sigma bond. The two π* orbitals on CO bind to filled metal orbitals. The effect is related to the Dewar-Chatt-Duncanson model. The effects of the quasi-triple M-C bond is reflected in the infrared spectrum of these complexes. Whereas free CO vibrates at 2143 cm-1, its complexes tend to absorb near 1950 cm-1.
Organic and main group chemistry
In the presence of strong acids, alkenes react with carboxylic acids. Hydrolysis of this species (an acylium ion) gives the carboxylic acid, a net process known as the Koch–Haaf reaction. In the Gattermann–Koch reaction, arenes are converted to benzaldehyde derivatives in the presence of CO, AlCl3, and HCl.
A mixture of hydrogen gas and CO reacts with alkenes to give aldehydes. The process requires the presence of metal catalysts.
With main group reagents, CO undergoes several noteworthy reactions. Chlorination of CO is the industrial route to the important compound phosgene. With borane CO forms the adduct H3BCO, which is isoelectronic with the acylium cation [H3CCO]+. CO reacts with sodium to give products resulting from C−C coupling such as sodium acetylenediolate 2Na+·C2O22−. It reacts with molten potassium to give a mixture of an organometallic compound, potassium acetylenediolate 2K+·C2O22−, potassium benzenehexolate 6K+·C6O66−, and potassium rhodizonate 2K+·C5O52−.
The compounds cyclohexanehexone or triquinoyl (C6O6) and cyclopentanepentone or leuconic acid (C5O5), which so far have been obtained only in trace amounts, can be regarded as polymers of carbon monoxide. At pressures exceeding 5 GPa, carbon monoxide converts to polycarbonyl, a solid polymer that is metastable at atmospheric pressure but is explosive.
Laboratory preparation
Carbon monoxide is conveniently produced in the laboratory by the dehydration of formic acid or oxalic acid, for example with concentrated sulfuric acid. Another method is heating an intimate mixture of powdered zinc metal and calcium carbonate, which releases CO and leaves behind zinc oxide and calcium oxide:
Zn + CaCO3 → ZnO + CaO + CO
Silver nitrate and iodoform also afford carbon monoxide:
CHI3 + 3AgNO3 + H2O → 3HNO3 + CO + 3AgI
Finally, metal oxalate salts release CO upon heating, leaving a carbonate as byproduct (shown here for an alkali metal M):
M2C2O4 → M2CO3 + CO
Production
Thermal combustion is the most common source for carbon monoxide. Carbon monoxide is produced from the partial oxidation of carbon-containing compounds; it forms when there is not enough oxygen to produce carbon dioxide (), such as when operating a stove or an internal combustion engine in an enclosed space.
A large quantity of CO byproduct is formed during the oxidative processes for the production of chemicals. For this reason, the process off-gases have to be purified.
Many methods have been developed for carbon monoxide production.
Industrial production
A major industrial source of CO is producer gas, a mixture containing mostly carbon monoxide and nitrogen, formed by combustion of carbon in air at high temperature when there is an excess of carbon. In an oven, air is passed through a bed of coke. The initially produced CO2 equilibrates with the remaining hot carbon to give CO. The reaction of CO2 with carbon to give CO is described as the Boudouard reaction. Above 800 °C, CO is the predominant product:
CO2 (g) + C (s) → 2 CO (g) (ΔHr = 170 kJ/mol)
Another source is "water gas", a mixture of hydrogen and carbon monoxide produced via the endothermic reaction of steam and carbon:
H2O (g) + C (s) → H2 (g) + CO (g) (ΔHr = 131 kJ/mol)
Other similar "synthesis gases" can be obtained from natural gas and other fuels.
Carbon monoxide can also be produced by high-temperature electrolysis of carbon dioxide with solid oxide electrolyzer cells. One method developed at DTU Energy uses a cerium oxide catalyst and does not have any issues of fouling of the catalyst.
2 CO2 → 2 CO + O2
Carbon monoxide is also a byproduct of the reduction of metal oxide ores with carbon, shown in a simplified form as follows:
MO + C → M + CO
Carbon monoxide is also produced by the direct oxidation of carbon in a limited supply of oxygen or air.
2 C + O2 → 2 CO
Since CO is a gas, the reduction process can be driven by heating, exploiting the positive (favorable) entropy of reaction. The Ellingham diagram shows that CO formation is favored over CO2 at high temperatures.
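The entropy argument can be made quantitative with ΔG = ΔH − TΔS. In the sketch below, ΔH is the Boudouard value given above, while the reaction entropy ΔS ≈ +170 J/(mol·K) is an assumed, typical literature-order value not stated in this article:

dH = 170_000.0  # J/mol for CO2(g) + C(s) -> 2 CO(g), from the text above
dS = 170.0      # J/(mol·K), assumed order-of-magnitude value

T_crossover = dH / dS  # temperature where dG = dH - T*dS changes sign
print(f"ΔG < 0 above roughly {T_crossover:.0f} K "
      f"({T_crossover - 273.15:.0f} °C)")
# ~1000 K (~730 °C), consistent with CO predominating above 800 °C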
Use
Chemical industry
Carbon monoxide is an industrial gas that has many applications in bulk chemicals manufacturing. Large quantities of aldehydes are produced by the hydroformylation reaction of alkenes, carbon monoxide, and H2. Hydroformylation is coupled to the Shell higher olefin process to give precursors to detergents.
Phosgene, useful for preparing isocyanates, polycarbonates, and polyurethanes, is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst. World production of this compound was estimated to be 2.74 million tonnes in 1989.
CO + Cl2 → COCl2
Methanol is produced by the hydrogenation of carbon monoxide. In a related reaction, the hydrogenation of carbon monoxide is coupled to C−C bond formation, as in the Fischer–Tropsch process where carbon monoxide is hydrogenated to liquid hydrocarbon fuels. This technology allows coal or biomass to be converted to diesel.
In the Cativa process, carbon monoxide and methanol react in the presence of a homogeneous iridium catalyst and hydroiodic acid to give acetic acid. This process is responsible for most of the industrial production of acetic acid.
Metallurgy
Carbon monoxide is a strong reductive agent and has been used in pyrometallurgy to reduce metals from ores since ancient times. Carbon monoxide strips oxygen off metal oxides, reducing them to pure metal in high temperatures, forming carbon dioxide in the process. Carbon monoxide is not usually supplied as is, in the gaseous phase, in the reactor, but rather it is formed in high temperature in presence of oxygen-carrying ore, or a carboniferous agent such as coke, and high temperature. The blast furnace process is a typical example of a process of reduction of metal from ore with carbon monoxide.
Likewise, blast furnace gas collected at the top of blast furnace, still contains some 10% to 30% of carbon monoxide, and is used as fuel on Cowper stoves and on Siemens-Martin furnaces on open hearth steelmaking.
Proposed use as a rocket fuel
Carbon monoxide has been proposed for use as a fuel on Mars by NASA researcher Geoffrey Landis. Carbon monoxide/oxygen engines have been suggested for early surface transportation use as both carbon monoxide and oxygen can be straightforwardly produced from the carbon dioxide atmosphere of Mars by zirconia electrolysis, without using any Martian water resources to obtain hydrogen, which would be needed to make methane or any hydrogen-based fuel.
Landis also proposed manufacturing the fuel from the similar carbon dioxide atmosphere of Venus for a sample return mission, in combination with solar-powered UAVs and rocket balloon ascent.
Biological and physiological properties
Physiology
Carbon monoxide is a bioactive molecule which acts as a gaseous signaling molecule. It is naturally produced by many enzymatic and non-enzymatic pathways, the best understood of which is the catabolic action of heme oxygenase on the heme derived from hemoproteins such as hemoglobin. Following the first report that carbon monoxide is a normal neurotransmitter in 1993, carbon monoxide has received significant clinical attention as a biological regulator.
Because of carbon monoxide's role in the body, abnormalities in its metabolism have been linked to a variety of diseases, including neurodegenerations, hypertension, heart failure, and pathological inflammation. In many tissues, carbon monoxide acts as an anti-inflammatory and vasodilatory agent and encourages neovascular growth. In animal model studies, carbon monoxide reduced the severity of experimentally induced bacterial sepsis, pancreatitis, hepatic ischemia/reperfusion injury, colitis, osteoarthritis, lung injury, lung transplantation rejection, and neuropathic pain while promoting skin wound healing. Therefore, there is significant interest in the therapeutic potential of carbon monoxide becoming a pharmaceutical agent and clinical standard of care.
Medicine
Studies involving carbon monoxide have been conducted in many laboratories throughout the world for its anti-inflammatory and cytoprotective properties. These properties have the potential to be used to prevent the development of a series of pathological conditions including ischemia reperfusion injury, transplant rejection, atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Many pharmaceutical drug delivery initiatives have developed methods to safely administer carbon monoxide, and subsequent controlled clinical trials have evaluated the therapeutic effect of carbon monoxide.
Microbiology
Microbiota may also utilize carbon monoxide as a gasotransmitter. Carbon monoxide sensing is a signaling pathway facilitated by proteins such as CooA. The scope of the biological roles for carbon monoxide sensing is still unknown.
The human microbiome produces, consumes, and responds to carbon monoxide. For example, in certain bacteria, carbon monoxide is produced via the reduction of carbon dioxide by the enzyme carbon monoxide dehydrogenase with favorable bioenergetics to power downstream cellular operations. In another example, carbon monoxide is a nutrient for methanogenic archaea which reduce it to methane using hydrogen.
Carbon monoxide has certain antimicrobial properties which have been studied to treat against infectious diseases.
Food science
Carbon monoxide is used in modified atmosphere packaging systems in the US, mainly with fresh meat products such as beef, pork, and fish to keep them looking fresh. The benefit is twofold: carbon monoxide protects against microbial spoilage and it enhances the meat color for consumer appeal. The carbon monoxide combines with myoglobin to form carboxymyoglobin, a bright-cherry-red pigment. Carboxymyoglobin is more stable than the oxygenated form of myoglobin, oxymyoglobin, which can become oxidized to the brown pigment metmyoglobin. This stable red color can persist much longer than in normally packaged meat. Typical levels of carbon monoxide used in the facilities that use this process are between 0.4% and 0.5%.
The technology was first given "generally recognized as safe" (GRAS) status by the U.S. Food and Drug Administration (FDA) in 2002 for use as a secondary packaging system, and does not require labeling. In 2004, the FDA approved CO as primary packaging method, declaring that CO does not mask spoilage odor. The process is currently unauthorized in many other countries, including Japan, Singapore, and the European Union.
Weaponization
In ancient history, Hannibal executed Roman prisoners with coal fumes during the Second Punic War.
Carbon monoxide had been used for genocide during the Holocaust at some extermination camps, the most notable by gas vans in Chełmno, and in the Action T4 "euthanasia" program.
History
Prehistory
Humans have maintained a complex relationship with carbon monoxide since first learning to control fire circa 800,000 BC. Early humans probably discovered the toxicity of carbon monoxide poisoning upon introducing fire into their dwellings. The early development of metallurgy and smelting technologies emerging circa 6,000 BC through the Bronze Age likewise plagued humankind from carbon monoxide exposure. Apart from the toxicity of carbon monoxide, indigenous Native Americans may have experienced the neuroactive properties of carbon monoxide through shamanistic fireside rituals.
Ancient history
Early civilizations developed mythological tales to explain the origin of fire, such as Prometheus from Greek mythology who shared fire with humans. Aristotle (384–322 BC) first recorded that burning coals produced toxic fumes. Greek physician Galen (129–199 AD) speculated that there was a change in the composition of the air that caused harm when inhaled, and many others of the era developed a basis of knowledge about carbon monoxide in the context of coal fume toxicity. Cleopatra may have died from carbon monoxide poisoning.
Pre–industrial revolution
Georg Ernst Stahl mentioned carbonarii halitus in 1697 in reference to toxic vapors thought to be carbon monoxide. Friedrich Hoffmann conducted the first modern scientific investigation into carbon monoxide poisoning from coal in 1716. Herman Boerhaave conducted the first scientific experiments on the effect of carbon monoxide (coal fumes) on animals in the 1730s.
Joseph Priestley is considered to have first synthesized carbon monoxide in 1772. Carl Wilhelm Scheele similarly isolated carbon monoxide from charcoal in 1773 and thought it could be the carbonic entity making fumes toxic. Torbern Bergman isolated carbon monoxide from oxalic acid in 1775. Later in 1776, the French chemist de Lassone produced CO by heating zinc oxide with coke, but mistakenly concluded that the gaseous product was hydrogen, as it burned with a blue flame. In the presence of oxygen, including atmospheric concentrations, carbon monoxide burns with a blue flame, producing carbon dioxide. Antoine Lavoisier conducted similar inconclusive experiments to those of de Lassone in 1777. The gas was identified as a compound containing carbon and oxygen by William Cruickshank in 1800.
Thomas Beddoes and James Watt recognized carbon monoxide (as hydrocarbonate) to brighten venous blood in 1793. Watt suggested coal fumes could act as an antidote to the oxygen in blood, and Beddoes and Watt likewise suggested hydrocarbonate has a greater affinity for animal fiber than oxygen in 1796. In 1854, Adrien Chenot similarly suggested carbon monoxide to remove the oxygen from blood and then be oxidized by the body to carbon dioxide. The mechanism for carbon monoxide poisoning is widely credited to Claude Bernard whose memoirs beginning in 1846 and published in 1857 phrased it as "prevents arterial blood from becoming venous". Felix Hoppe-Seyler independently published similar conclusions in the following year.
Advent of industrial chemistry
Carbon monoxide gained recognition as an essential reagent in the 1900s. Three industrial processes illustrate its evolution in industry. In the Fischer–Tropsch process, coal and related carbon-rich feedstocks are converted into liquid fuels via the intermediacy of CO. Originally developed as part of the German war effort to compensate for their lack of domestic petroleum, this technology continues today. Also in Germany, a mixture of CO and hydrogen was found to combine with olefins to give aldehydes. This process, called hydroformylation, is used to produce many large scale chemicals such as surfactants as well as specialty compounds that are popular fragrances and drugs. For example, CO is used in the production of vitamin A. In a third major process, attributed to researchers at Monsanto, CO combines with methanol to give acetic acid. Most acetic acid is produced by the Cativa process. Hydroformylation and the acetic acid syntheses are two of myriad carbonylation processes.
See also
References
External links
Global map of carbon monoxide distribution
Explanation of the structure
International Chemical Safety Card 0023
CDC NIOSH Pocket Guide to Chemical Hazards: Carbon monoxide—National Institute for Occupational Safety and Health (NIOSH), US Centers for Disease Control and Prevention (CDC)
Carbon Monoxide—NIOSH Workplace Safety and Health Topic—CDC
Carbon Monoxide Poisoning—Frequently Asked Questions—CDC
External MSDS data sheet
Carbon Monoxide Detector Placement
Microscale Gas Chemistry Experiments with Carbon Monoxide
Airborne pollutants
Articles containing video clips
Gaseous signaling molecules
Industrial gases
Oxocarbons
Smog
Toxicology
Diatomic molecules | Carbon monoxide | [
"Physics",
"Chemistry",
"Environmental_science"
] | 6,347 | [
"Visibility",
"Toxicology",
"Physical quantities",
"Molecules",
"Smog",
"Signal transduction",
"Gaseous signaling molecules",
"Industrial gases",
"Chemical process engineering",
"Diatomic molecules",
"Matter"
] |
6,138 | https://en.wikipedia.org/wiki/Conjecture | In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them.
Resolution of conjectures
Proof
Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10^12 (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample.
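The flavor of such a search is easy to reproduce on a small scale. The following Python sketch is illustrative only: the bound, the step cap, and the function name are arbitrary choices, not taken from any published search. It verifies that the Collatz sequence of every starting value below 100,000 reaches 1:

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Follow the Collatz rule from n; return True once the sequence hits 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > max_steps:
            return False  # give up; a genuine counterexample would never reach 1
    return True

# No counterexample below this bound -- consistent with, but of course
# not a proof of, the conjecture.
assert all(collatz_reaches_one(n) for n in range(1, 100_000))
```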
Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results.
A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details.
One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software.
When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others.
Disproof
Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller.
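Both cases are small enough to verify directly. The following Python check confirms the 1966 Lander–Parkin counterexample for fifth powers and the smaller minimal counterexample later found for fourth powers; the identities themselves are well-known published results, and only the wrapping code is illustrative:

```python
# Lander & Parkin (1966): disproves Euler's conjecture for fifth powers.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# Frye (1988): the minimal counterexample for fourth powers.
assert 95800**4 + 217519**4 + 414560**4 == 422481**4
```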
Independent conjectures
Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry).
In this case, if a proof uses this statement, researchers will often look for a new proof that does not require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular.
Conditional proofs
Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being.
These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type.
Important examples
Fermat's Last Theorem
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems".
Four color theorem
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852.
The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counterexample). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists because any such counterexample must contain, yet could not contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain.
Hauptvermutung
The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908 by Steinitz and Tietze.
This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion.
The manifold version is true in dimensions m ≤ 3. The cases m = 2 and m = 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively.
Weil conjectures
In mathematics, the Weil conjectures were some highly influential proposals by André Weil on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields.
A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with q^k elements containing that field. The generating function has coefficients derived from the numbers N_k of points over the (essentially unique) field with q^k elements.
Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis was proved by Pierre Deligne.
Poincaré conjecture
In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time.
After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct.
The Poincaré conjecture, before being proven, was one of the most important open questions in topology.
Riemann hypothesis
In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann in 1859, is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems.
P versus NP problem
The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
Other conjectures
Goldbach's conjecture
The twin prime conjecture
The Collatz conjecture
The Manin conjecture
The Maldacena conjecture
The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid 20th century
The Hardy-Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither has been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false.
The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved.
In other sciences
Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture.
See also
Bold hypothesis
Futures studies
Hypotheticals
List of conjectures
Ramanujan machine
References
Works cited
External links
Open Problem Garden
Unsolved Problems web site
Concepts in the philosophy of science
Statements
Mathematical terminology | Conjecture | [
"Mathematics"
] | 2,932 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures",
"nan"
] |
6,172 | https://en.wikipedia.org/wiki/Cantor%20set | In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and mentioned by German mathematician Georg Cantor in 1883.
Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned this ternary construction only in passing, as an example of a perfect set that is nowhere dense (Cantor 1883, Anmerkungen zu §10, p. 590).
More generally, in topology, a Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). The Cantor set is naturally homeomorphic to the countable product of the discrete two-point space {0, 1}. By a theorem of L. E. J. Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable and zero dimensional.
Construction and formula of the ternary set
The Cantor ternary set C is created by iteratively deleting the open middle third from a set of line segments. One starts by deleting the open middle third (1/3, 2/3) from the interval [0, 1], leaving two line segments: [0, 1/3] ∪ [2/3, 1]. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1].
The Cantor ternary set contains all points in the interval [0, 1] that are not deleted at any step in this infinite process. The same facts can be described recursively by setting
C_0 := [0, 1]
and
C_n := C_{n−1}/3 ∪ (2/3 + C_{n−1}/3)
for n ≥ 1, so that
C = ⋂_{n=0}^∞ C_n = ⋂_{n=m}^∞ C_n for any m ≥ 0.
The first six steps of this process are illustrated below.
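In place of a picture, the first steps can be computed directly. Here is a minimal Python sketch of the construction (the function name and the use of exact fractions are implementation choices, not part of the standard definition):

```python
from fractions import Fraction

def cantor_step(intervals):
    """Replace each closed interval [a, b] by its two outer thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))    # left third  [a, a + third]
        out.append((b - third, b))    # right third [b - third, b]
    return out

level = [(Fraction(0), Fraction(1))]  # C_0 = [0, 1]
for n in range(3):
    level = cantor_step(level)
print(level)  # the 8 intervals of C_3, each of length 1/27
```

After n steps the list holds the 2^n closed intervals of C_n, each of length 3^−n.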
Using the idea of self-similar transformations, T_L(x) = x/3 and T_R(x) = (2 + x)/3, the explicit closed formulas for the Cantor set are
C = [0, 1] ∖ ⋃_{n=1}^∞ ⋃_{k=0}^{3^{n−1}−1} ((3k+1)/3^n, (3k+2)/3^n),
where every middle third is removed as the open interval ((3k+1)/3^n, (3k+2)/3^n) from the closed interval [3k/3^n, (3k+3)/3^n] surrounding it, or
C = ⋂_{n=1}^∞ ⋃_{k=0}^{3^{n−1}−1} ([3k/3^n, (3k+1)/3^n] ∪ [(3k+2)/3^n, (3k+3)/3^n]),
where the middle third ((3k+1)/3^n, (3k+2)/3^n) of the foregoing closed interval [3k/3^n, (3k+3)/3^n] is removed by intersecting with [3k/3^n, (3k+1)/3^n] ∪ [(3k+2)/3^n, (3k+3)/3^n].
This process of removing middle thirds is a simple example of a finite subdivision rule. The complement of the Cantor ternary set is an example of a fractal string.
In arithmetical terms, the Cantor set consists of all real numbers of the unit interval that do not require the digit 1 in order to be expressed as a ternary (base 3) fraction. As the above diagram illustrates, each point in the Cantor set is uniquely located by a path through an infinitely deep binary tree, where the path turns left or right at each level according to which side of a deleted segment the point lies on. Representing each left turn with 0 and each right turn with 2 yields the ternary fraction for a point.
Mandelbrot's construction by "curdling"
In The Fractal Geometry of Nature, mathematician Benoit Mandelbrot provides a whimsical thought experiment to assist non-mathematical readers in imagining the construction of C. His narrative begins with imagining a bar, perhaps of lightweight metal, in which the bar's matter "curdles" by iteratively shifting towards its extremities. As the bar's segments become smaller, they become thin, dense slugs that eventually grow too small and faint to see.

CURDLING: The construction of the Cantor bar results from the process I call curdling. It begins with a round bar. It is best to think of it as having a very low density. Then matter "curdles" out of this bar's middle third into the end thirds, so that the positions of the latter remain unchanged. Next matter curdles out of the middle third of each end third into its end thirds, and so on ad infinitum until one is left with an infinitely large number of infinitely thin slugs of infinitely high density. These slugs are spaced along the line in the very specific fashion induced by the generating process. In this illustration, curdling (which eventually requires hammering!) stops when both the printer's press and our eye cease to follow; the last line is indistinguishable from the last but one: each of its ultimate parts is seen as a gray slug rather than two parallel black slugs.
Composition
Since the Cantor set is defined as the set of points not excluded, the proportion (i.e., measure) of the unit interval remaining can be found by the total length removed. This total is the geometric progression
∑_{n=0}^∞ 2^n/3^{n+1} = 1/3 + 2/9 + 4/27 + 8/81 + ⋯ = (1/3)·(1/(1 − 2/3)) = 1.
So that the proportion left is 1 − 1 = 0.
This calculation suggests that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left—after all, the sum of the lengths of the removed intervals is equal to the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the "middle third" of each interval involved removing open sets (sets that do not include their endpoints). So removing the line segment (1/3, 2/3) from the original interval [0, 1] leaves behind the points 1/3 and 2/3. Subsequent steps do not remove these (or other) endpoints, since the intervals removed are always internal to the intervals remaining. So the Cantor set is not empty, and in fact contains an uncountably infinite number of points (as follows from the above description in terms of paths in an infinite binary tree).
It may appear that only the endpoints of the construction segments are left, but that is not the case either. The number 1/4, for example, has the unique ternary form 0.020202...3. It is in the bottom third, and the top third of that third, and the bottom third of that top third, and so on. Since it is never in one of the middle segments, it is never removed. Yet it is also not an endpoint of any middle segment, because it is not a multiple of any power of 1/3.
All endpoints of segments are terminating ternary fractions and are contained in the set
{x ∈ [0, 1] : x·3^i ∈ Z for some i ∈ N0},
which is a countably infinite set.
As to cardinality, almost all elements of the Cantor set are not endpoints of intervals, nor rational points like 1/4. The whole Cantor set is in fact not countable.
Properties
Cardinality
It can be shown that there are as many points left behind in this process as there were to begin with, and that therefore, the Cantor set is uncountable. To see this, we show that there is a function f from the Cantor set C to the closed interval [0,1] that is surjective (i.e. f maps from C onto [0,1]) so that the cardinality of C is no less than that of [0,1]. Since C is a subset of [0,1], its cardinality is also no greater, so the two cardinalities must in fact be equal, by the Cantor–Bernstein–Schröder theorem.
To construct this function, consider the points in the [0, 1] interval in terms of base 3 (or ternary) notation. Recall that the proper ternary fractions, more precisely: the nonzero multiples of powers of 1/3, admit more than one representation in this notation, as for example 1/3, which can be written as 0.13 but also as 0.0222...3, and 2/3, which can be written as 0.23 but also as 0.1222...3.
When we remove the middle third, this contains the numbers with ternary numerals of the form 0.1xxxxx...3 where xxxxx...3 is strictly between 00000...3 and 22222...3. So the numbers remaining after the first step consist of
Numbers of the form 0.0xxxxx...3 (including 0.022222...3 = 1/3)
Numbers of the form 0.2xxxxx...3 (including 0.222222...3 = 1)
This can be summarized by saying that those numbers with a ternary representation such that the first digit after the radix point is not 1 are the ones remaining after the first step.
The second step removes numbers of the form 0.01xxxx...3 and 0.21xxxx...3, and (with appropriate care for the endpoints) it can be concluded that the remaining numbers are those with a ternary numeral where neither of the first two digits is 1.
Continuing in this way, for a number not to be excluded at step n, it must have a ternary representation whose nth digit is not 1. For a number to be in the Cantor set, it must not be excluded at any step; that is, it must admit a numeral representation consisting entirely of 0s and 2s.
It is worth emphasizing that numbers like 1, 1/3 = 0.13 and 7/9 = 0.213 are in the Cantor set, as they have ternary numerals consisting entirely of 0s and 2s: 1 = 0.222...3, 1/3 = 0.0222...3 and 7/9 = 0.20222...3.
All the latter numbers are "endpoints", and these examples are right limit points of C. The same is true for the left limit points of C, e.g. 2/3 = 0.1222...3 = 0.23 and 8/9 = 0.21222...3 = 0.223. All these endpoints are proper ternary fractions of the form p/q, where denominator q is a power of 3 when the fraction is in its irreducible form. The ternary representation of these fractions terminates (i.e., is finite) or — recall from above that proper ternary fractions each have 2 representations — is infinite and "ends" in either infinitely many recurring 0s or infinitely many recurring 2s. Such a fraction is a left limit point of C if its ternary representation contains no 1's and "ends" in infinitely many recurring 0s. Similarly, a proper ternary fraction is a right limit point of C if its ternary expansion again contains no 1's and "ends" in infinitely many recurring 2s.
This set of endpoints is dense in C (but not dense in [0, 1]) and makes up a countably infinite set. The numbers in C which are not endpoints also have only 0s and 2s in their ternary representation, but they cannot end in an infinite repetition of the digit 0, nor of the digit 2, because then it would be an endpoint.
The function from C to [0,1] is defined by taking the ternary numerals that do consist entirely of 0s and 2s, replacing all the 2s by 1s, and interpreting the sequence as a binary representation of a real number. In a formula,
f(∑_{k∈N} a_k 3^{−k}) = ∑_{k∈N} (a_k/2) 2^{−k},
where the digits a_k ∈ {0, 2}.
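On digit sequences this map is mechanical: halve each ternary digit (sending 2 to 1 and 0 to 0) and reinterpret the result in base 2. A minimal Python sketch, assuming a point is given by a finite prefix of its ternary digit sequence (the representation as a digit list is an implementation choice):

```python
def f(ternary_digits):
    """Map a Cantor-set point, given by ternary digits in {0, 2}, into [0, 1]:
    halve each digit and read the resulting sequence as binary digits."""
    assert all(d in (0, 2) for d in ternary_digits)
    return sum((d // 2) * 2 ** -(k + 1) for k, d in enumerate(ternary_digits))

# 1/4 = 0.020202... (base 3) maps to 0.010101... (base 2) = 1/3.
approx = f([0, 2] * 20)
assert abs(approx - 1 / 3) < 1e-9
```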
For any number y in [0,1], its binary representation can be translated into a ternary representation of a number x in C by replacing all the 1s by 2s. With this, f(x) = y so that y is in the range of f. For instance if y = 3/5 = 0.100110011001...2, we write x = 0.200220022002...3 = 7/10. Consequently, f is surjective. However, f is not injective — the values for which f(x) coincides are those at opposing ends of one of the middle thirds removed. For instance, take
1/3 = 0.0222...3 (which is a right limit point of C and a left limit point of the middle third [1/3, 2/3]) and
2/3 = 0.2000...3 (which is a left limit point of C and a right limit point of the middle third [1/3, 2/3])
so
f(1/3) = 0.0111...2 = 1/2 = 0.1000...2 = f(2/3).
Thus there are as many points in the Cantor set as there are in the interval [0, 1] (which has the uncountable cardinality 𝔠 = 2^ℵ0). However, the set of endpoints of the removed intervals is countable, so there must be uncountably many numbers in the Cantor set which are not interval endpoints. As noted above, one example of such a number is 1/4, which can be written as 0.020202...3 in ternary notation. In fact, given any a ∈ [−1, 1], there exist x, y ∈ C such that a = y − x. This was first demonstrated by Steinhaus in 1917, who proved, via a geometric argument, the equivalent assertion that {(x, y) ∈ R² : y = x + a} ∩ (C × C) ≠ ∅ for every a ∈ [−1, 1]. Since this construction provides an injection from [−1, 1] to C × C, we have |C × C| ≥ |[−1, 1]| = 𝔠 as an immediate corollary. Assuming that |A × A| = |A| for any infinite set A (a statement shown to be equivalent to the axiom of choice by Tarski), this provides another demonstration that |C| = 𝔠.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
It has been conjectured that all algebraic irrational numbers are normal. Since members of the Cantor set are not normal in base 3, this would imply that all members of the Cantor set are either rational or transcendental.
Self-similarity
The Cantor set is the prototype of a fractal. It is self-similar, because it is equal to two copies of itself, if each copy is shrunk by a factor of 3 and translated. More precisely, the Cantor set is equal to the union of its images under the two functions, the left and right self-similarity transformations of itself, T_L(x) = x/3 and T_R(x) = (2 + x)/3, which leave the Cantor set invariant up to homeomorphism: C = T_L(C) ∪ T_R(C).
Repeated iteration of T_L and T_R can be visualized as an infinite binary tree. That is, at each node of the tree, one may consider the subtree to the left or to the right. Taking the set {T_L, T_R} together with function composition forms a monoid, the dyadic monoid.
The automorphisms of the binary tree are its hyperbolic rotations, and are given by the modular group. Thus, the Cantor set is a homogeneous space in the sense that for any two points x and y in the Cantor set C, there exists a homeomorphism h : C → C with h(x) = y. An explicit construction of h can be described more easily if we see the Cantor set as a product space of countably many copies of the discrete space {0, 1}. Then the map h defined by h(u) := u + x + y, with coordinatewise addition mod 2, is an involutive homeomorphism exchanging x and y.
Conservation law
It has been found that some form of conservation law is always behind scaling and self-similarity. In the case of the Cantor set it can be seen that the d_f-th moment (where d_f = ln 2/ln 3 is the fractal dimension) of all the surviving intervals at any stage of the construction process is equal to a constant, which is equal to one in the case of the Cantor set.
We know that there are N = 2^n intervals of size 1/3^n present in the system at the nth step of its construction. Then if we label the surviving intervals as x_1, x_2, ..., x_{2^n}, the d_f-th moment is x_1^{d_f} + x_2^{d_f} + ⋯ + x_{2^n}^{d_f} = 2^n · (1/3^n)^{d_f} = 1, since (1/3^n)^{d_f} = 2^{−n}.
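This conservation is easy to confirm numerically for the first few steps of the construction (an illustrative check, not a proof):

```python
from math import log

d_f = log(2) / log(3)                 # fractal dimension of the Cantor set
for n in range(1, 8):
    moment = (2**n) * (3**-n) ** d_f  # 2^n intervals, each of size 3^-n
    assert abs(moment - 1) < 1e-12    # the d_f-th moment stays equal to 1
```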
The Hausdorff dimension of the Cantor set is equal to ln(2)/ln(3) ≈ 0.631.
Topological and analytical properties
Although "the" Cantor set typically refers to the original, middle-thirds Cantor set described above, topologists often talk about "a" Cantor set, which means any topological space that is homeomorphic (topologically equivalent) to it.
As the above summation argument shows, the Cantor set is uncountable but has Lebesgue measure 0. Since the Cantor set is the complement of a union of open sets, it itself is a closed subset of the reals, and therefore a complete metric space. Since it is also totally bounded, the Heine–Borel theorem says that it must be compact.
For any point in the Cantor set and any arbitrarily small neighborhood of the point, there is some other number with a ternary numeral of only 0s and 2s, as well as numbers whose ternary numerals contain 1s. Hence, every point in the Cantor set is an accumulation point (also called a cluster point or limit point) of the Cantor set, but none is an interior point. A closed set in which every point is an accumulation point is also called a perfect set in topology, while a closed subset of the interval with no interior points is nowhere dense in the interval.
Every point of the Cantor set is also an accumulation point of the complement of the Cantor set.
For any two points in the Cantor set, there will be some ternary digit where they differ — one will have 0 and the other 2. By splitting the Cantor set into "halves" depending on the value of this digit, one obtains a partition of the Cantor set into two closed sets that separate the original two points. In the relative topology on the Cantor set, the points have been separated by a clopen set. Consequently, the Cantor set is totally disconnected. As a compact totally disconnected Hausdorff space, the Cantor set is an example of a Stone space.
As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space {0, 1}, where each copy carries the discrete topology. This is the space of all sequences in two digits,
2^N = {(x_n) : x_n ∈ {0, 1} for n ∈ N},
which can also be identified with the set of 2-adic integers. The basis for the open sets of the product topology are cylinder sets; the homeomorphism maps these to the subspace topology that the Cantor set inherits from the natural topology on the real line. This characterization of the Cantor space as a product of compact spaces gives a second proof that Cantor space is compact, via Tychonoff's theorem.
From the above characterization, the Cantor set is homeomorphic to the p-adic integers, and, if one point is removed from it, to the p-adic numbers.
The Cantor set is a subset of the reals, which are a metric space with respect to the ordinary distance metric; therefore the Cantor set itself is a metric space, by using that same metric. Alternatively, one can use the p-adic metric on 2^N: given two sequences (x_n), (y_n) ∈ 2^N, the distance between them is d((x_n), (y_n)) = 2^{−k}, where k is the smallest index such that x_k ≠ y_k; if there is no such index, then the two sequences are the same, and one defines the distance to be zero. These two metrics generate the same topology on the Cantor set.
We have seen above that the Cantor set is a totally disconnected perfect compact metric space. Indeed, in a sense it is the only one: every nonempty totally disconnected perfect compact metric space is homeomorphic to the Cantor set. See Cantor space for more on spaces homeomorphic to the Cantor set.
The Cantor set is sometimes regarded as "universal" in the category of compact metric spaces, since any compact metric space is a continuous image of the Cantor set; however this construction is not unique and so the Cantor set is not universal in the precise categorical sense. The "universal" property has important applications in functional analysis, where it is sometimes known as the representation theorem for compact metric spaces.
For any integer q ≥ 2, the topology on the group G = Z_q^ω (the countable direct sum) is discrete. Although the Pontrjagin dual Γ is also Z_q^ω, the topology of Γ is compact. One can see that Γ is totally disconnected and perfect; thus it is homeomorphic to the Cantor set. It is easiest to write out the homeomorphism explicitly in the case q = 2. (See Rudin 1962, p. 40.)
Measure and probability
The Cantor set can be seen as the compact group of binary sequences, and as such, it is endowed with a natural Haar measure. When normalized so that the measure of the set is 1, it is a model of an infinite sequence of coin tosses. Furthermore, one can show that the usual Lebesgue measure on the interval is an image of the Haar measure on the Cantor set, while the natural injection into the ternary set is a canonical example of a singular measure. It can also be shown that the Haar measure is an image of any probability, making the Cantor set a universal probability space in some ways.
In Lebesgue measure theory, the Cantor set is an example of a set which is uncountable and has zero measure. In contrast, the set has a Hausdorff measure of 1 in its dimension of log 2 / log 3.
Cantor numbers
If we define a Cantor number as a member of the Cantor set, then
Every real number in [0, 2] is the sum of two Cantor numbers.
Between any two Cantor numbers there is a number that is not a Cantor number.
Descriptive set theory
The Cantor set is a meagre set (or a set of first category) as a subset of [0,1] (although not as a subset of itself, since it is a Baire space). The Cantor set thus demonstrates that notions of "size" in terms of cardinality, measure, and (Baire) category need not coincide. Like the set Q ∩ [0,1], the Cantor set C is "small" in the sense that it is a null set (a set of measure zero) and it is a meagre subset of [0,1]. However, unlike Q ∩ [0,1], which is countable and has a "small" cardinality, ℵ0, the cardinality of C is the same as that of [0,1], the continuum 𝔠, and is "large" in the sense of cardinality. In fact, it is also possible to construct a subset of [0,1] that is meagre but of positive measure and a subset that is non-meagre but of measure zero: By taking the countable union of "fat" Cantor sets C^(n) of measure λ = (n−1)/n (see Smith–Volterra–Cantor set below for the construction), we obtain a set A := ⋃_n C^(n) which has a positive measure (equal to 1) but is meagre in [0,1], since each C^(n) is nowhere dense. Then consider the set A' = [0,1] ∖ A. Since A ∪ A' = [0,1], A' cannot be meagre, but since μ(A) = 1, A' must have measure zero.
Variants
Smith–Volterra–Cantor set
Instead of repeatedly removing the middle third of every piece as in the Cantor set, we could also keep removing any other fixed percentage (other than 0% and 100%) from the middle. In the case where the middle 8/10 of the interval is removed, we get a remarkably accessible case — the set consists of all numbers in [0,1] that can be written as a decimal consisting entirely of 0s and 9s. If a fixed percentage f is removed at each stage, then the limiting set will have measure zero, since the length of the remainder (1 − f)^n → 0 as n → ∞ for any f such that 0 < f ≤ 1.
On the other hand, "fat Cantor sets" of positive measure can be generated by removal of smaller fractions of the middle of the segment in each iteration. Thus, one can construct sets homeomorphic to the Cantor set that have positive Lebesgue measure while still being nowhere dense. If an interval of length () is removed from the middle of each segment at the nth iteration, then the total length removed is , and the limiting set will have a Lebesgue measure of . Thus, in a sense, the middle-thirds Cantor set is a limiting case with . If , then the remainder will have positive measure with . The case is known as the Smith–Volterra–Cantor set, which has a Lebesgue measure of .
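The closed form for the removed length can be cross-checked by direct summation; the following Python sketch does so for the Smith–Volterra–Cantor case r = 1/4 (variable names and the truncation point are arbitrary):

```python
from fractions import Fraction

r = Fraction(1, 4)
removed = sum(2 ** (n - 1) * r**n for n in range(1, 60))  # partial sum of removals
exact = r / (1 - 2 * r)                                   # closed form: 1/2 here
assert abs(float(removed) - float(exact)) < 1e-8
print(float(exact), 1 - float(exact))  # removed 0.5, remaining measure 0.5
```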
Stochastic Cantor set
One can modify the construction of the Cantor set by dividing randomly instead of equally. In addition, to incorporate time, we can divide only one of the available intervals at each step instead of dividing all the available intervals. In the case of the stochastic triadic Cantor set the resulting process can be described by the following rate equation
and for the stochastic dyadic Cantor set
where is the number of intervals of size between and . In the case of triadic Cantor set the fractal dimension is which is
less than its deterministic counterpart . In the case of stochastic dyadic Cantor set
the fractal dimension is which is again less than that of its deterministic counterpart . In the case of the stochastic dyadic Cantor set the solution for exhibits dynamic scaling as its solution in the long-time limit is where the fractal dimension of the stochastic dyadic Cantor set . In either case, like the triadic Cantor set, the d_f-th moments of the stochastic triadic and dyadic Cantor sets are also conserved quantities.
Cantor dust
Cantor dust is a multi-dimensional version of the Cantor set. It can be formed by taking a finite Cartesian product of the Cantor set with itself, making it a Cantor space. Like the Cantor set, Cantor dust has zero measure.
A different 2D analogue of the Cantor set is the Sierpinski carpet, where a square is divided up into nine smaller squares, and the middle one removed. The remaining squares are then further divided into nine each and the middle removed, and so on ad infinitum. One 3D analogue of this is the Menger sponge.
Historical remarks
Cantor introduced what we call today the Cantor ternary set C as an example "of a perfect point-set, which is not everywhere-dense in any interval, however small." Cantor described C in terms of ternary expansions, as "the set of all real numbers given by the formula: z = c_1/3 + c_2/3² + ⋯ + c_ν/3^ν + ⋯, where the coefficients c_ν arbitrarily take the two values 0 and 2, and the series can consist of a finite number or an infinite number of elements."
A topological space P is perfect if all its points are limit points or, equivalently, if it coincides with its derived set P′. Subsets of the real line, like C, can be seen as topological spaces under the induced subspace topology.
Cantor was led to the study of derived sets by his results on uniqueness of trigonometric series. The latter did much to set him on the course for developing an abstract, general theory of infinite sets.
Benoit Mandelbrot wrote much on Cantor dusts and their relation to natural fractals and statistical physics. He further reflected on the puzzling or even upsetting nature of such structures to those in the mathematics and physics community. In The Fractal Geometry of Nature, he described how "When I started on this topic in 1962, everyone was agreeing that Cantor dusts are at least as monstrous as the Koch and Peano curves," and added that "every self-respecting physicist was automatically turned off by a mention of Cantor, ready to run a mile from anyone claiming C to be interesting in science."
See also
The indicator function of the Cantor set
Smith–Volterra–Cantor set
Cantor function
Cantor cube
Antoine's necklace
Koch snowflake
Knaster–Kuratowski fan
List of fractals by Hausdorff dimension
Moser–de Bruijn sequence
Notes
References
External links
Cantor Sets and Cantor Set and Function at cut-the-knot
Cantor Set at Platonic Realms
Measure theory
Topological spaces
Sets of real numbers
Georg Cantor
L-systems | Cantor set | [
"Mathematics"
] | 5,540 | [
"Topological spaces",
"Mathematical structures",
"Topology",
"Space (mathematics)"
] |
6,173 | https://en.wikipedia.org/wiki/Cardinal%20number | In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced, which are often denoted with the Hebrew letter (aleph) marked with subscript indicating their rank among the infinite cardinals.
Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets.
There is a transfinite sequence of cardinal numbers:
This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers. The aleph numbers are indexed by ordinal numbers. If the axiom of choice is true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true (see ), there are infinite cardinals that are not aleph numbers.
Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets.
History
The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not equal, but have the same cardinality, namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}.
Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N denumerable (countably infinite) sets, which all share the same cardinal number. This cardinal number is called ℵ0, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number z may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple (a0, a1, ..., an), ai ∈ Z together with a pair of rationals (b0, b1) such that z is the unique root of the polynomial with coefficients (a0, a1, ..., an) that lies in the interval (b0, b1).
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol 𝔠 for it.
Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (ℵ0, aleph-null), and that for every cardinal number there is a next-larger cardinal.
His continuum hypothesis is the proposition that the cardinality of the set of real numbers is the same as ℵ1. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
Motivation
In informal use, a cardinal number is what is normally referred to as a counting number, provided that 0 is included: 0, 1, 2, .... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic.
More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d',...>, and we can construct the set {a,b,c}, which has 3 elements.
However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions.
A set Y is at least as big as a set X if there is an injective mapping from the elements of X to the elements of Y. An injective mapping identifies each element of the set X with a unique element of the set Y. This is most easily understood by an example; suppose we have the sets X = {1,2,3} and Y = {a,b,c,d}, then using this notion of size, we would observe that there is a mapping:
1 → a
2 → b
3 → c
which is injective, and hence conclude that Y has cardinality greater than or equal to X. The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets.
We can then extend this to an equality-style relation. Two sets X and Y are said to have the same cardinality if there exists a bijection between X and Y. By the Schroeder–Bernstein theorem, this is equivalent to there being both an injective mapping from X to Y, and an injective mapping from Y to X. We then write |X| = |Y|. The cardinal number of X itself is often defined as the least ordinal a with |a| = |X|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as some ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects.
The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping:
1 → 2
2 → 3
3 → 4
...
n → n + 1
...
With this assignment, we can see that the set {1,2,3,...} has the same cardinality as the set {2,3,4,...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4,...} is a proper subset of {1,2,3,...}.
When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers.
It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals.
Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as equipotence, equipollence, or equinumerosity. It is thus said that two sets with the same cardinality are, respectively, equipotent, equipollent, or equinumerous.
Formal definition
Formally, assuming the axiom of choice, the cardinality of a set X is the least ordinal number α such that there is a bijection between X and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set X (implicit in Cantor and explicit in Frege and Principia Mathematica) is as the class [X] of all sets that are equinumerous with X. This does not work in ZFC or other related systems of axiomatic set theory because if X is non-empty, this collection is too large to be a set. In fact, for X ≠ ∅ there is an injection from the universe into [X] by mapping a set m to {m} × X, and so by the axiom of limitation of size, [X] is a proper class. The definition does work however in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with X that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, in ordinal arithmetic 1 + ω = ω ≠ ω + 1, while in cardinal arithmetic 1 + ℵ0 = ℵ0 = ℵ0 + 1, although the von Neumann assignment gives |ω + 1| = ω. On the other hand, Scott's trick implies that the cardinal number 0 is {∅}, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.
Formally, the order among cardinal numbers is defined as follows: |X| ≤ |Y| means that there exists an injective function from X to Y. The Cantor–Bernstein–Schroeder theorem states that if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The axiom of choice is equivalent to the statement that given two sets X and Y, either |X| ≤ |Y| or |Y| ≤ |X|.
A set X is Dedekind-infinite if there exists a proper subset Y of X with |X| = |Y|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set X is finite if and only if |X| = |n| = n for some natural number n. Any other set is infinite.
Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal ℵ0 (aleph null or aleph-0, where aleph is the first letter in the Hebrew alphabet, represented ℵ) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality ℵ0). The next larger cardinal is denoted by ℵ1, and so on. For every ordinal α, there is a cardinal number ℵα, and this list exhausts all infinite cardinal numbers.
Cardinal arithmetic
We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.
Successor cardinal
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ+, where κ+ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ+ such that κ+ ≰ κ.) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.
Cardinal addition
If X and Y are disjoint, addition is given by the union of X and Y: |X| + |Y| = |X ∪ Y|. If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace X by X×{0} and Y by Y×{1}).
Zero is an additive identity κ + 0 = 0 + κ = κ.
Addition is associative (κ + μ) + ν = κ + (μ + ν).
Addition is commutative κ + μ = μ + κ.
Addition is non-decreasing in both arguments:
κ ≤ μ → (κ + ν ≤ μ + ν and ν + κ ≤ ν + μ).
Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either κ or μ is infinite, then κ + μ = max{κ, μ}.
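For example, ℵ0 + ℵ0 = ℵ0 = max{ℵ0, ℵ0}: the even and the odd natural numbers are two disjoint sets, each of cardinality ℵ0, whose union is all of N.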
Subtraction
Assuming the axiom of choice and, given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It will be unique (and equal to σ) if and only if μ < σ.
Cardinal multiplication
The product of cardinals comes from the Cartesian product: |X|·|Y| = |X × Y|.
κ·0 = 0·κ = 0.
κ·μ = 0 → (κ = 0 or μ = 0).
One is a multiplicative identity κ·1 = 1·κ = κ.
Multiplication is associative (κ·μ)·ν = κ·(μ·ν).
Multiplication is commutative κ·μ = μ·κ.
Multiplication is non-decreasing in both arguments:
κ ≤ μ → (κ·ν ≤ μ·ν and ν·κ ≤ ν·μ).
Multiplication distributes over addition:
κ·(μ + ν) = κ·μ + κ·ν and
(μ + ν)·κ = μ·κ + ν·κ.
Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either κ or μ is infinite and both are non-zero, then κ·μ = max{κ, μ}.
Thus the product of two infinite cardinal numbers is equal to their sum.
Division
Assuming the axiom of choice and, given an infinite cardinal π and a non-zero cardinal μ, there exists a cardinal κ such that μ · κ = π if and only if μ ≤ π. It will be unique (and equal to π) if and only if μ < π.
Cardinal exponentiation
Exponentiation is given by
|X|^|Y| = |X^Y|,
where X^Y is the set of all functions from Y to X. It is easy to check that the right-hand side depends only on |X| and |Y|.
κ^0 = 1 (in particular 0^0 = 1), see empty function.
If 1 ≤ μ, then 0^μ = 0.
1^μ = 1.
κ^1 = κ.
κ^(μ + ν) = κ^μ·κ^ν.
κ^(μ·ν) = (κ^μ)^ν.
(κ·μ)^ν = κ^ν·μ^ν.
Exponentiation is non-decreasing in both arguments:
(1 ≤ ν and κ ≤ μ) → (ν^κ ≤ ν^μ) and
(κ ≤ μ) → (κ^ν ≤ μ^ν).
2^|X| is the cardinality of the power set of the set X and Cantor's diagonal argument shows that 2^|X| > |X| for any set X. This proves that no largest cardinal exists (because for any cardinal κ, we can always find a larger cardinal 2^κ). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)
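For finite sets, these definitions can be verified by brute-force enumeration. A small Python sketch (illustrative only; it represents a function from Y to X as the tuple of its values):

```python
from itertools import product

X, Y = {'a', 'b', 'c'}, {1, 2}

functions = list(product(X, repeat=len(Y)))  # each tuple lists f(1), f(2)
assert len(functions) == len(X) ** len(Y)    # |X^Y| = |X|^|Y| = 3^2 = 9

# Cantor's theorem, finite case: |P(Y)| = 2^|Y| > |Y|.
power_set_size = len(list(product((0, 1), repeat=len(Y))))
assert power_set_size == 2 ** len(Y) > len(Y)
```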
All the remaining propositions in this section assume the axiom of choice:
If κ and μ are both finite and greater than 1, and ν is infinite, then κ^ν = μ^ν.
If κ is infinite and μ is finite and non-zero, then κ^μ = κ.
If 2 ≤ κ and 1 ≤ μ and at least one of them is infinite, then:
max(κ, 2^μ) ≤ κ^μ ≤ max(2^κ, 2^μ).
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ, where cf(κ) is the cofinality of κ.
Roots
Assuming the axiom of choice and, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying ν^μ = κ will be κ.
Logarithms
Assuming the axiom of choice and, given an infinite cardinal κ and a finite cardinal μ greater than 1, there may or may not be a cardinal λ satisfying μ^λ = κ. However, if such a cardinal exists, it is infinite and less than κ, and any finite cardinality ν greater than 1 will also satisfy ν^λ = κ.
The logarithm of an infinite cardinal number κ is defined as the least cardinal number μ such that κ ≤ 2μ. Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
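As a worked example in LaTeX (standard, though supplied here rather than taken from the source):

```latex
% The cardinal logarithm of the continuum is \aleph_0: the least \mu with
% 2^{\aleph_0} \le 2^{\mu} is \mu = \aleph_0, since any finite \mu gives
% 2^{\mu} finite, and in particular 2^{\mu} < 2^{\aleph_0}.
\[ \log\left(2^{\aleph_0}\right) = \aleph_0 . \]
```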
The continuum hypothesis
The continuum hypothesis (CH) states that there are no cardinals strictly between ℵ₀ and 2^ℵ₀. The latter cardinal number is also often denoted by 𝔠; it is the cardinality of the continuum (the set of real numbers). In this case 2^ℵ₀ = ℵ₁.
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal κ, there are no cardinals strictly between κ and 2^κ. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals κ, the only restrictions ZFC places on the cardinality of 2^κ are that κ < cf(2^κ), and that the exponential function is non-decreasing.
See also
Aleph number
Beth number
The paradox of the greatest cardinal
Cardinal number (linguistics)
Counting
Inclusion–exclusion principle
Large cardinal
Names of numbers in English
Nominal number
Ordinal number
Regular cardinal
References
Notes
Bibliography
Hahn, Hans, Infinity, Part IX, Chapter 2, Volume 3 of The World of Mathematics. New York: Simon and Schuster, 1956.
Halmos, Paul, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition).
External links | Cardinal number | [
"Mathematics"
] | 4,476 | [
"Cardinal numbers",
"Mathematical objects",
"Numbers",
"Infinity"
] |
6,174 | https://en.wikipedia.org/wiki/Cardinality | In mathematics, cardinality describes a relationship between sets which compares their relative size. For example, the sets {1, 2, 3} and {4, 5, 6} are the same size as they each contain 3 elements. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two notions often used when referring to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers.
The cardinality of a set may also be called its size, when no confusion with other notions of size is possible.
When two sets, A and B, have the same cardinality, it is usually written as |A| = |B|; however, if referring to the cardinal number of an individual set A, it is simply denoted |A|, with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinal number of a set A may alternatively be denoted by n(A), card(A), or #A.
History
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago. Human expression of cardinality is seen as early as 40,000 years ago, with equating the size of a group with a group of recorded notches, or a representative collection of other things, such as sticks and shells. The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerian mathematics and the manipulation of numbers without reference to a specific group of things or events.
From the 6th century BCE, the writings of Greek philosophers show hints of the cardinality of infinite sets. While they considered the notion of infinity as an endless series of actions, such as adding 1 to a number repeatedly, they did not consider the size of an infinite set of numbers to be a thing. The ancient Greek notion of infinity also considered the division of things into parts repeated without limit. In Euclid's Elements, commensurability was described as the ability to compare the length of two line segments, a and b, as a ratio, as long as there were a third segment, no matter how small, that could be laid end-to-end a whole number of times into both a and b. But with the discovery of irrational numbers, it was seen that even the infinite set of all rational numbers was not enough to describe the length of every possible line segment. Still, there was no concept of infinite sets as something that had cardinality.
To better understand infinite sets, a notion of cardinality was formulated by Georg Cantor, the originator of set theory. He examined the process of equating two sets with bijection, a one-to-one correspondence between the elements of two sets based on a unique relationship. In 1891, with the publication of Cantor's diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e. uncountable sets that contain more elements than there are in the infinite set of natural numbers.
Comparing sets
While the cardinality of a finite set is simply comparable to its number of elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets (some of which are possibly infinite).
Definition 1: =
Two sets A and B have the same cardinality if there exists a bijection (a.k.a., one-to-one correspondence) from A to B, that is, a function from A to B that is both injective and surjective. Such sets are said to be equipotent, equipollent, or equinumerous.
For example, the set E of non-negative even numbers has the same cardinality as the set N of natural numbers, since the function f(n) = 2n is a bijection from N to E.
For finite sets A and B, if some bijection exists from A to B, then each injective or surjective function from A to B is a bijection. This is no longer true for infinite A and B. For example, the function g from N to E, defined by g(n) = 4n, is injective but not surjective, since 2, for instance, is not mapped to; and the function h from N to E, defined by h(n) = n − (n mod 2) (see: modulo operation), is surjective but not injective, since 0 and 1, for instance, both map to 0. Neither g nor h can challenge |E| = |N|, which was established by the existence of f.
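Finite truncations of these three maps can be checked mechanically. The Python sketch below (the cut-off at 20 and the helper name are ours; a finite check illustrates, but of course does not prove, the statements about the infinite sets):

```python
N = range(20)                          # finite stand-in for the naturals

f = {n: 2 * n for n in N}              # the bijection candidate n -> 2n
g = {n: 4 * n for n in N}              # injective, but 2 is never hit
h = {n: n - (n % 2) for n in N}        # maps 0,1 -> 0, 2,3 -> 2, ...

def injective(func):
    return len(set(func.values())) == len(func)

evens_below_20 = {n for n in range(20) if n % 2 == 0}
print(injective(f))                                      # True
print(injective(g), 2 in g.values())                     # True False
print(injective(h), set(h.values()) == evens_below_20)   # False True
```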
Definition 2: ≤
A has cardinality less than or equal to the cardinality of B, if there exists an injective function from A into B.
If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (a fact known as the Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A and B.
Definition 3: <
A has cardinality strictly less than the cardinality of B, if there is an injective function, but no bijective function, from A to B.
For example, the set N of all natural numbers has cardinality strictly less than its power set P(N), because g(n) = {n} is an injective function from N to P(N), and it can be shown that no function from N to P(N) can be bijective. By a similar argument, N has cardinality strictly less than the cardinality of the set R of all real numbers. For proofs, see Cantor's diagonal argument or Cantor's first uncountability proof.
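The diagonal argument can even be verified exhaustively for a small finite set; the following Python sketch (illustrative only) checks every function from a 3-element set into its power set:

```python
from itertools import combinations, product

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

X = [0, 1, 2]
subsets = powerset(X)                  # 2^3 = 8 subsets

# Cantor's diagonal set D = {x : x not in f(x)} is never in the image of f,
# so no f : X -> P(X) is surjective; verified here for all 8^3 = 512 maps.
for values in product(subsets, repeat=len(X)):
    f = dict(zip(X, values))
    D = frozenset(x for x in X if x not in f[x])
    assert D not in f.values()
print("no function maps X onto its power set")
```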
Cardinal numbers
In the above section, "cardinality" of a set was defined functionally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation, then, consists of all those sets which have the same cardinality as A. There are two ways to define the "cardinality of a set":
The cardinality of a set A is defined as its equivalence class under equinumerosity.
A representative set is designated for each equivalence class. The most common choice is the initial ordinal in that class. This is usually taken as the definition of cardinal number in axiomatic set theory.
Assuming the axiom of choice, the cardinalities of the infinite sets are denoted ℵ₀ < ℵ₁ < ℵ₂ < ….
For each ordinal α, ℵ_(α+1) is the least cardinal number greater than ℵ_α.
The cardinality of the natural numbers is denoted aleph-null (ℵ₀), while the cardinality of the real numbers is denoted by "𝔠" (a lowercase fraktur script "c"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that 𝔠 > ℵ₀. We can show that 𝔠 = 2^ℵ₀, this also being the cardinality of the set of all subsets of the natural numbers.
The continuum hypothesis says that ℵ₁ = 2^ℵ₀, i.e. 2^ℵ₀ is the smallest cardinal number bigger than ℵ₀, i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC—provided that ZFC is consistent. For more detail, see § Cardinality of the continuum below.
Finite, countable and uncountable sets
If the axiom of choice holds, the law of trichotomy holds for cardinality. Thus we can make the following definitions:
Any set X with cardinality less than that of the natural numbers, or | X | < | N |, is said to be a finite set.
Any set X that has the same cardinality as the set of the natural numbers, or | X | = | N | = ℵ₀, is said to be a countably infinite set.
Any set X with cardinality greater than that of the natural numbers, or | X | > | N |, for example | R | = 𝔠 > | N |, is said to be uncountable.
Infinite sets
Our intuition gained from finite sets breaks down when dealing with infinite sets. In the late 19th century Georg Cantor, Gottlob Frege, Richard Dedekind and others rejected the view that the whole cannot be the same size as the part. One example of this is Hilbert's paradox of the Grand Hotel.
Indeed, Dedekind defined an infinite set as one that can be placed into a one-to-one correspondence with a strict subset (that is, having the same size in Cantor's sense); this notion of infinity is called Dedekind infinite. Cantor introduced the cardinal numbers, and showed—according to his bijection-based definition of size—that some infinite sets are greater than others. The smallest infinite cardinality is that of the natural numbers (ℵ₀).
Cardinality of the continuum
One of Cantor's most important results was that the cardinality of the continuum (𝔠) is greater than that of the natural numbers (ℵ₀); that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that 𝔠 = 2^ℵ₀ = ℶ₁ (see Beth one) satisfies
𝔠 > ℵ₀
(see Cantor's diagonal argument or Cantor's first uncountability proof).
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is, 𝔠 = ℵ₁.
However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.
Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space. These results are highly counterintuitive, because they imply that there exist proper subsets and proper supersets of an infinite set S that have the same size as S, although S contains elements that do not belong to its subsets, and the supersets of S contain elements that are not included in it.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−½π, ½π) and R (see also Hilbert's paradox of the Grand Hotel).
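In LaTeX notation, the correspondence is (the rescaling remark is our addition):

```latex
% tan, restricted to the open interval, is continuous and strictly
% increasing with limits -\infty and +\infty at the endpoints, hence a
% bijection onto the reals:
\[ \tan \colon \left(-\tfrac{\pi}{2},\, \tfrac{\pi}{2}\right) \to \mathbb{R}. \]
% Composing with the linear bijection t \mapsto \pi t - \pi/2 from (0,1)
% onto the interval shows |(0,1)| = |\mathbb{R}| as well.
```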
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.
Cantor also showed that sets with cardinality strictly greater than exist (see his generalized diagonal argument and theorem). They include, for instance:
the set of all subsets of R, i.e., the power set of R, written P(R) or 2^R
the set RR of all functions from R to R
Both have cardinality 2^𝔠 = ℶ₂
(see Beth two).
The cardinal equalities 𝔠·𝔠 = 𝔠 and 𝔠^ℵ₀ = 𝔠 can be demonstrated using cardinal arithmetic: 𝔠·𝔠 = 2^ℵ₀·2^ℵ₀ = 2^(ℵ₀+ℵ₀) = 2^ℵ₀ = 𝔠 and 𝔠^ℵ₀ = (2^ℵ₀)^ℵ₀ = 2^(ℵ₀·ℵ₀) = 2^ℵ₀ = 𝔠.
Examples and properties
If X = {a, b, c} and Y = {apples, oranges, peaches}, where a, b, and c are distinct, then | X | = | Y | because { (a, apples), (b, oranges), (c, peaches)} is a bijection between the sets X and Y. The cardinality of each of X and Y is 3.
If | X | ≤ | Y |, then there exists Z such that | X | = | Z | and Z ⊆ Y.
If | X | ≤ | Y | and | Y | ≤ | X |, then | X | = | Y |. This holds even for infinite cardinals, and is known as Cantor–Bernstein–Schroeder theorem.
Sets with cardinality of the continuum include the set of all real numbers, the set of all irrational numbers and the interval [0, 1].
Union and intersection
If A and B are disjoint sets, then |A ∪ B| = |A| + |B|.
From this, one can show that in general, the cardinalities of unions and intersections are related by the following equation: |A ∪ B| + |A ∩ B| = |A| + |B|.
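The identity can be spot-checked on random finite sets; a minimal Python sketch (illustrative):

```python
from random import randint, sample

# Randomized check of |A ∪ B| + |A ∩ B| = |A| + |B| on small integer sets.
for _ in range(1000):
    A = set(sample(range(30), randint(0, 15)))
    B = set(sample(range(30), randint(0, 15)))
    assert len(A | B) + len(A & B) == len(A) + len(B)
print("identity holds on all sampled pairs")
```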
Definition of cardinality in class theory (NBG or MK)
Here V denotes the class of all sets, and Ord denotes the class of all ordinal numbers.
We use the intersection of a class X, which is defined by ∩X = {a : ∀x ∈ X (a ∈ x)}; therefore ∩∅ = V.
In this case
|A| = ∩{α ∈ Ord : there is a bijection between A and α}.
This definition also allows one to obtain a cardinality for any proper class P, in particular |P| = V (since no ordinal is equinumerous with a proper class, the intersection above is ∩∅ = V).
This definition is natural since it agrees with the axiom of limitation of size, which implies a bijection between V and any proper class.
See also
Aleph number
Beth number
Cantor's paradox
Cantor's theorem
Countable set
Counting
Ordinality
Pigeonhole principle
References
Basic concepts in infinite set theory | Cardinality | [
"Mathematics"
] | 2,668 | [
"Cardinal numbers",
"Basic concepts in infinite set theory",
"Mathematical objects",
"Infinity",
"Basic concepts in set theory",
"Numbers"
] |
6,185 | https://en.wikipedia.org/wiki/Chuck%20Yeager | Brigadier General Charles Elwood Yeager ( , February 13, 1923December 7, 2020) was a United States Air Force officer, flying ace, and record-setting test pilot who in October 1947 became the first pilot in history confirmed to have exceeded the speed of sound in level flight.
Yeager was raised in Hamlin, West Virginia. His career began in World War II as a private in the United States Army, assigned to the Army Air Forces in 1941. After serving as an aircraft mechanic, in September 1942, he entered enlisted pilot training and upon graduation was promoted to the rank of flight officer (the World War II Army Air Force version of the Army's warrant officer), later achieving most of his aerial victories as a P-51 Mustang fighter pilot on the Western Front, where he was credited with shooting down 11.5 enemy aircraft (the half credit is from a second pilot assisting him in a single shootdown). On October 12, 1944, he attained "ace in a day" status, shooting down five enemy aircraft in one mission.
After the war, Yeager became a test pilot and flew many types of aircraft, including experimental rocket-powered aircraft for the National Advisory Committee for Aeronautics (NACA). Through the NACA program, he became the first human to officially break the sound barrier on October 14, 1947, when he flew the experimental Bell X-1 at Mach 1.05 at an altitude of 45,000 ft (13,700 m), for which he won both the Collier and Mackay trophies in 1948. He then went on to break several other speed and altitude records in the following years. In 1962, he became the first commandant of the USAF Aerospace Research Pilot School, which trained and produced astronauts for NASA and the Air Force.
Yeager later commanded fighter squadrons and wings in Germany, as well as in Southeast Asia during the Vietnam War. In recognition of his achievements and the outstanding performance ratings of those units, he was promoted to brigadier general in 1969 and inducted into the National Aviation Hall of Fame in 1973, retiring on March 1, 1975, a date he reportedly chose for its colloquial similarity to "Mach 1". His three-war active-duty flying career spanned more than 30 years and took him to many parts of the world, including the Korean War zone and the Soviet Union during the height of the Cold War.
Yeager is referred to by many as one of the greatest pilots of all time, and was ranked fifth on Flying magazine's list of the 51 Heroes of Aviation in 2013. Throughout his life, he flew more than 360 different types of aircraft over a 70-year period, and continued to fly for two decades after retirement as a consultant pilot for the United States Air Force. In 2020, at the age of 97, Yeager died in a Los Angeles-area hospital.
Early life and education
Yeager was born February 13, 1923, in Myra, West Virginia, to farming parents Albert Hal Yeager (1896–1963) and Susie Mae Yeager (; 1898–1987). When he was five years old, his family moved to Hamlin, West Virginia. Yeager had two brothers, Roy and Hal Jr., and two sisters, Doris Ann (accidentally killed at age two by four-year-old Roy playing with a firearm) and Pansy Lee.
He attended Hamlin High School, where he played basketball and football, receiving his best grades in geometry and typing. He graduated from high school in June 1941.
His first experience with the military was as a teen at the Citizens Military Training Camp at Fort Benjamin Harrison, Indianapolis, Indiana, during the summers of 1939 and 1940. On February 26, 1945, Yeager married Glennis Dickhouse, and the couple had four children. Glennis Yeager died in 1990, predeceasing her husband by 30 years.
His cousin, Steve Yeager, was a professional baseball catcher.
Career
World War II
Yeager enlisted as a private in the U.S. Army Air Forces (USAAF) on September 12, 1941, and became an aircraft mechanic at George Air Force Base, Victorville, California. At enlistment, Yeager was not eligible for flight training because of his age and educational background, but the entry of the U.S. into World War II less than three months later prompted the USAAF to alter its recruiting standards. Yeager had unusually sharp vision (a visual acuity rated 20/10), which once enabled him to shoot a deer at 600 yards.
At the time of his flight training acceptance, he was a crew chief on an AT-11. He received his pilot wings and a promotion to flight officer at Luke Field, Arizona, where he graduated from Class 43C on March 10, 1943. Assigned to the 357th Fighter Group at Tonopah, Nevada, he initially trained as a fighter pilot, flying Bell P-39 Airacobras (being grounded for seven days for clipping a farmer's tree during a training flight), and shipped overseas with the group on November 23, 1943.
Stationed in the United Kingdom at RAF Leiston, Yeager flew P-51 Mustangs in combat with the 363d Fighter Squadron. He named his aircraft Glamorous Glen after his girlfriend, Glennis Faye Dickhouse, who became his wife in February 1945. Yeager had gained one victory before he was shot down over France in his first aircraft (P-51B-5-NA s/n 43-6763) on March 5, 1944, on his eighth mission. He escaped to Spain on March 30, 1944, with the help of the Maquis (French Resistance) and returned to England on May 15, 1944. During his stay with the Maquis, Yeager assisted the guerrillas in duties that did not involve direct combat; he helped construct bombs for the group, a skill that he had learned from his father. He was awarded the Bronze Star for helping a navigator, Omar M. "Pat" Patterson Jr., to cross the Pyrenees.
Despite a regulation prohibiting "evaders" (escaped pilots) from flying over enemy territory again, the purpose of which was to prevent resistance groups from being compromised by giving the enemy a second chance to possibly capture him, Yeager was reinstated to flying combat. He had joined another evader, fellow P-51 pilot 1st Lt Fred Glover, in speaking directly to the Supreme Allied Commander, General Dwight D. Eisenhower, on June 12, 1944. "I raised so much hell that General Eisenhower finally let me go back to my squadron" Yeager said. "He cleared me for combat after D Day, because all the free Frenchmen – Maquis and people like that – had surfaced". Eisenhower, after gaining permission from the War Department to decide the requests, concurred with Yeager and Glover. In the meantime, Yeager shot down his second enemy aircraft, a German Junkers Ju 88 bomber, over the English Channel.
Yeager demonstrated outstanding flying skills and combat leadership. On October 12, 1944, he became the first pilot in his group to make "ace in a day," downing five enemy aircraft in a single mission. Two of these victories were scored without firing a single shot: when he flew into firing position against a Messerschmitt Bf 109, the pilot of the aircraft panicked, breaking to port and colliding with his wingman. Yeager said both pilots bailed out. He finished the war with 11.5 official victories, including one of the first air-to-air victories over a jet fighter, a German Messerschmitt Me 262 that he shot down as it was on final approach for landing.
In his 1986 memoirs, Yeager recalled with disgust that "atrocities were committed by both sides", and said he went on a mission with orders from the Eighth Air Force to "strafe anything that moved". During the mission briefing, he whispered to Major Donald H. Bochkay, "If we are going to do things like this, we sure as hell better make sure we are on the winning side". Yeager said, "I'm certainly not proud of that particular strafing mission against civilians. But it is there, on the record and in my memory". He also expressed bitterness at his treatment in England during World War II, describing the British as "arrogant" and "nasty" on Twitter.
Yeager was commissioned a second lieutenant while at Leiston, and was promoted to captain before the end of his tour. He flew his 61st and final mission on January 15, 1945, and returned to the United States in early February 1945. As an evader, he received his choice of assignments and, because his new wife was pregnant, chose Wright Field to be near his home in West Virginia. His high number of flight hours and maintenance experience qualified him to become a functional test pilot of repaired aircraft, which brought him under the command of Colonel Albert Boyd, head of the Aeronautical Systems Flight Test Division.
Post-World War II
Test pilot – breaking the sound barrier
Yeager remained in the U.S. Army Air Forces after the war, becoming a test pilot at Muroc Army Air Field (now Edwards Air Force Base), following graduation from Air Materiel Command Flight Performance School (Class 46C). After Bell Aircraft test pilot Chalmers "Slick" Goodlin demanded US$150,000 to break the sound "barrier", the USAAF selected the 24-year-old Yeager to fly the rocket-powered Bell XS-1 in a NACA program to research high-speed flight. Under the National Security Act of 1947, the USAAF became the United States Air Force (USAF) on September 18.
Such was the difficulty of the task that the answer to many of the inherent challenges was along the lines of "Yeager better have paid-up insurance". Two nights before the scheduled flight date, Yeager broke two ribs when he fell from a horse. He was worried that the injury would remove him from the mission and reported that he went to a civilian doctor in nearby Rosamond, who taped his ribs. Besides his wife, who was riding with him, Yeager told only his friend and fellow project pilot Jack Ridley about the accident. On the day of the flight, Yeager was in such pain that he could not seal the X-1's hatch by himself. Ridley rigged up a device, using the end of a broom handle as an extra lever, to allow Yeager to seal the hatch.
Yeager broke the sound barrier on October 14, 1947, in level flight while piloting the X-1 Glamorous Glennis at Mach 1.05 at an altitude of 45,000 ft (13,700 m) over the Rogers Dry Lake of the Mojave Desert in California. The success of the mission was not announced to the public for nearly eight months, until June 10, 1948. Yeager was awarded the Mackay Trophy and the Collier Trophy in 1948 for his mach-transcending flight, and the Harmon International Trophy in 1954. The X-1 he flew that day was later put on permanent display at the Smithsonian Institution's National Air and Space Museum. During 1952, he attended the Air Command and Staff College.
Yeager continued to break many speed and altitude records. He was one of the first American pilots to fly a Mikoyan-Gurevich MiG-15, after its pilot, No Kum-sok, defected to South Korea. Returning to Muroc, during the latter half of 1953, Yeager was involved with the USAF team that was working on the X-1A, an aircraft designed to surpass Mach 2 in level flight. That year, he flew a chase aircraft for the civilian pilot Jackie Cochran as she became the first woman to fly faster than sound.
On November 20, 1953, the U.S. Navy program involving the Douglas D-558-II Skyrocket and its pilot, Scott Crossfield, became the first team to reach twice the speed of sound. After they were bested, Ridley and Yeager decided to beat rival Crossfield's speed record in a series of test flights that they dubbed "Operation NACA Weep". Not only did they beat Crossfield by setting a new record at Mach 2.44 on December 12, 1953, but they did it in time to spoil a celebration planned for the 50th anniversary of flight in which Crossfield was to be called "the fastest man alive".
The new record flight, however, did not entirely go to plan, since shortly after reaching Mach 2.44, Yeager lost control of the X-1A at high altitude due to inertia coupling, a phenomenon largely unknown at the time. With the aircraft simultaneously rolling, pitching, and yawing out of control, Yeager dropped tens of thousands of feet in less than a minute before regaining control at a far lower altitude. He then managed to land without further incident. For this feat, Yeager was awarded the Distinguished Service Medal (DSM) in 1954.
Military command
Yeager was foremost a fighter pilot and held several squadron and wing commands. From 1954 to 1957, he commanded the F-86H Sabre-equipped 417th Fighter-Bomber Squadron (50th Fighter-Bomber Wing) at Hahn AB, West Germany, and Toul-Rosieres Air Base, France; and from 1957 to 1960 the F-100D Super Sabre-equipped 1st Fighter Day Squadron at George Air Force Base, California, and Morón Air Base, Spain.
He was a full colonel in 1962, after completion of a year's studies and final thesis on STOL aircraft at the Air War College. He became the first commandant of the USAF Aerospace Research Pilot School, which produced astronauts for NASA and the USAF, after its redesignation from the USAF Flight Test Pilot School. He had only a high school education, so he was not eligible to become an astronaut like those he trained. In April 1962, Yeager made his only flight with Neil Armstrong. Their job, flying a T-33, was to evaluate Smith Ranch Dry Lake in Nevada for use as an emergency landing site for the North American X-15. In his autobiography, he wrote that he knew the lake bed was unsuitable for landings after recent rains, but Armstrong insisted on flying out anyway. As Armstrong suggested that they do a touch-and-go, Yeager advised against it, telling him "You may touch, but you ain't gonna go!" When Armstrong did touch down, the wheels became stuck in the mud, bringing the plane to a sudden stop and provoking Yeager to fits of laughter. They had to wait for rescue.
Yeager's participation in the test pilot training program for NASA included controversial behavior. Yeager reportedly did not believe that Ed Dwight, the first African American pilot admitted into the program, should be a part of it. In the 2019 documentary series Chasing the Moon, the filmmakers made the claim that Yeager instructed staff and participants at the school that "Washington is trying to cram the nigger down our throats. [President] Kennedy is using this to make 'racial equality,' so do not speak to him, do not socialize with him, do not drink with him, do not invite him over to your house, and in six months he'll be gone." In his autobiography, Dwight details how Yeager's leadership led to discriminatory treatment throughout his training at Edwards Air Force Base.
Between December 1963 and January 1964, Yeager completed five flights in the NASA M2-F1 lifting body. An accident during a December 1963 test flight in one of the school's NF-104s resulted in serious injuries. After climbing to a near-record altitude, the plane's controls became ineffective, and it entered a flat spin. After several turns, and an altitude loss of approximately 95,000 feet, Yeager ejected from the plane. During the ejection, the seat straps released normally, but the seat base slammed into Yeager, with the still-hot rocket motor breaking his helmet's plastic faceplate and causing his emergency oxygen supply to catch fire. The resulting burns to his face required extensive and agonizing medical care. This was Yeager's last attempt at setting test-flying records.
In 1966, Yeager took command of the 405th Tactical Fighter Wing at Clark Air Base, the Philippines, whose squadrons were deployed on rotational temporary duty (TDY) in South Vietnam and elsewhere in Southeast Asia. There he flew 127 missions. In February 1968, Yeager was assigned command of the 4th Tactical Fighter Wing at Seymour Johnson Air Force Base, North Carolina, and led the McDonnell Douglas F-4 Phantom II wing in South Korea during the Pueblo crisis.
Yeager was promoted to brigadier general and was assigned in July 1969 as the vice-commander of the Seventeenth Air Force.
From 1971 to 1973, at the behest of Ambassador Joseph Farland, Yeager was assigned as the Air Attache in Pakistan to advise the Pakistan Air Force which was led by Abdur Rahim Khan (the first Pakistani to break the sound barrier). He arrived in Pakistan at a time when tensions with India were at a high level. One of Yeager's jobs during this time was to assist Pakistani technicians in installing AIM-9 Sidewinders on PAF's Shenyang F-6 fighters. He also had a keen interest in interacting with PAF personnel from various Pakistani Squadrons and helping them develop combat tactics. In one instance in 1972, while visiting the No. 15 Squadron "Cobras" at Peshawar Airbase, the Squadron's OC Wing Commander Najeeb Khan escorted him to K2 in a pair of F-86Fs after Yeager requested a visit to the second highest mountain on Earth. After hostilities broke out in 1971, he decided to stay in West Pakistan and continued overseeing the PAF's operations. Yeager recalled "the Pakistanis whipped the Indians' asses in the sky... the Pakistanis scored a three-to-one kill ratio, knocking out 102 Russian-made Indian jets and losing 34 airplanes of their own". During the war, he flew around the western front in a helicopter documenting wreckages of Indian aircraft of Soviet origin which included Sukhoi Su-7s and MiG-21s. These aircraft were transported to the United States after the war for analysis. Yeager also flew around in his Beechcraft Queen Air, a small passenger aircraft that was assigned to him by the Pentagon, picking up shot-down Indian fighter pilots. The Beechcraft was later destroyed during an air raid by the IAF at a Pakistani airbase when Yeager was not present. Edward C. Ingraham, a U.S. diplomat who had served as political counselor to Ambassador Farland in Islamabad, recalled this incident in the Washington Monthly of October 1985: "After Yeager's Beechcraft was destroyed during an Indian air raid, he raged to his cowering colleagues that the Indian pilot had been specifically instructed by Indira Gandhi to blast his plane. 'It was', he later wrote, 'the Indian way of giving Uncle Sam the finger'". Yeager was incensed over the incident and demanded U.S. retaliation.
Post-retirement and in popular culture
On March 1, 1975, Yeager retired from the Air Force at Norton Air Force Base, California.
Yeager made a cameo appearance in the movie The Right Stuff (1983). He played "Fred", a bartender at "Pancho's Place", which was most appropriate, because he said, "if all the hours were ever totaled, I reckon I spent more time at her place than in a cockpit over those years". Sam Shepard portrayed Yeager in the film, which chronicles in part his famous 1947 record-breaking flight.
Yeager has been referenced several times in the shared Star Trek universe, including having a namesake fictional type of starship, a dangerous starship formation-maneuver named after him called the "Yeager Loop" (most notably mentioned in the Star Trek: The Next Generation episode "The First Duty"), and appearing in archival footage within the opening title sequence for the series Star Trek: Enterprise (2001–2005). For Enterprise, executive producer Rick Berman said that he envisaged the lead character, Captain Jonathan Archer, as being "halfway between Chuck Yeager and Han Solo".
For several years in the 1980s, Yeager was connected to General Motors, publicizing ACDelco, the company's automotive parts division. In 1986, he was invited to drive the Chevrolet Corvette pace car for the 70th running of the Indianapolis 500. In 1988, Yeager was again invited to drive the pace car, this time at the wheel of an Oldsmobile Cutlass Supreme. In 1986, President Reagan appointed Yeager to the Rogers Commission that investigated the explosion of the Space Shuttle Challenger.
During this time, Yeager also served as a technical adviser for three Electronic Arts flight simulator video games. The games include Chuck Yeager's Advanced Flight Trainer, Chuck Yeager's Advanced Flight Trainer 2.0, and Chuck Yeager's Air Combat. The game manuals feature quotes and anecdotes from Yeager and were well received by players. Missions feature several of Yeager's accomplishments and let players challenge his records. Chuck Yeager's Advanced Flight Trainer was Electronic Arts' top-selling game for 1987.
In 2009, Yeager participated in the documentary The Legend of Pancho Barnes and the Happy Bottom Riding Club, a profile of his friend Pancho Barnes. The documentary was screened at film festivals, aired on public television in the United States, and won an Emmy Award.
On October 14, 1997, on the 50th anniversary of his historic flight past Mach 1, he flew a new Glamorous Glennis III, an F-15D Eagle, past Mach 1. The chase plane for the flight was an F-16 Fighting Falcon piloted by Bob Hoover, a longtime test, fighter, and aerobatic pilot who had been Yeager's wingman for the first supersonic flight. At the end of his speech to the crowd in 1997, Yeager concluded, "All that I am ... I owe to the Air Force". Later that month, he was the recipient of the Tony Jannus Award for his achievements.
On October 14, 2012, on the 65th anniversary of breaking the sound barrier, Yeager did it again at the age of 89, flying as co-pilot in a McDonnell Douglas F-15 Eagle piloted by Captain David Vincent out of Nellis Air Force Base.
In October 2016, Yeager made international headlines when a Twitter argument with an Irish teenager led him to lash out at the British and Irish, calling Irish people British and labeling all British people as "nasty" and "arrogant". Though he was no stranger to controversy, this was one of the last major public faux pas of Yeager's life.
Awards and decorations
In 1973, Yeager was inducted into the National Aviation Hall of Fame, arguably aviation's highest honor. In 1974, Yeager received the Golden Plate Award of the American Academy of Achievement. In December 1975, the U.S. Congress awarded Yeager a silver medal "equivalent to a noncombat Medal of Honor ... for contributing immeasurably to aerospace science by risking his life in piloting the X-1 research airplane faster than the speed of sound on October 14, 1947". President Gerald Ford presented the medal to Yeager in a ceremony at the White House on December 8, 1976.
Yeager never attended college and was often modest about his background, but is considered by many, including Flying Magazine, the California Hall of Fame, the State of West Virginia, National Aviation Hall of Fame, a few U.S. presidents, and the United States Army Air Force, to be one of the greatest pilots of all time. Air & Space/Smithsonian magazine ranked him the fifth greatest pilot of all time in 2003. Regardless of his lack of higher education, West Virginia's Marshall University named its highest academic scholarship the Society of Yeager Scholars in his honor. He was the chairman of Experimental Aircraft Association (EAA)'s Young Eagle Program from 1994 to 2004, and was named the program's chairman emeritus.
In 1966, Yeager was inducted into the International Air & Space Hall of Fame. He was inducted into the International Space Hall of Fame in 1981. He was inducted into the Aerospace Walk of Honor 1990 inaugural class.
Yeager Airport in Charleston, West Virginia, is named in his honor. The Interstate 64/Interstate 77 bridge over the Kanawha River in Charleston is named in his honor. He also flew directly under the Kanawha Bridge and West Virginia named it the Chuck E. Yeager Bridge. On October 19, 2006, the state of West Virginia also honored Yeager with a marker along Corridor G (part of U.S. Highway 119) in his home Lincoln County, and also renamed part of it the Yeager Highway.
Yeager was an honorary board member of the humanitarian organization Wings of Hope. On August 25, 2009, Governor Arnold Schwarzenegger and Maria Shriver announced that Yeager would be one of 13 California Hall of Fame inductees in The California Museum's yearlong exhibit. The induction ceremony was on December 1, 2009, in Sacramento, California. Flying Magazine ranked Yeager number 5 on its 2013 list of The 51 Heroes of Aviation; for many years, he was the highest-ranked living person on the list.
The Civil Air Patrol, the volunteer auxiliary of the USAF, awards the Charles E. "Chuck" Yeager Award to its senior members as part of its Aerospace Education program.
Other achievements
1940–1949 – Harmon Trophy: Citation of Honorable Mention
1947 – Collier Trophy and Mackay Trophy, for breaking the sound barrier for the first time.
1953 – Harmon Trophy
1976 – Congressional Silver Medal
Dates of rank
Aerial victory credits
Personal life
Yeager named his plane after his wife, Glennis, as a good-luck charm: "You're my good-luck charm, hon. Any airplane I name after you always brings me home." Yeager and Glennis moved to Grass Valley, California, after his retirement from the Air Force in 1975. The couple prospered as a result of Yeager's best-selling autobiography, speaking engagements, and commercial ventures. Glennis Yeager died of ovarian cancer in 1990. They had four children (Susan, Don, Mickey, and Sharon). Yeager's son Mickey (Michael) died unexpectedly in Oregon, on March 26, 2011.
Yeager appeared in a Texas advertisement for George H. W. Bush's 1988 presidential campaign.
In 2000, Yeager met actress Victoria Scott D'Angelo on a hiking trail in Nevada County. The pair started dating shortly thereafter, and married in August 2003. A bitter dispute arose between Yeager, his children, and D'Angelo. The children contended that she, at least 35 years Yeager's junior, had married him for his fortune. Yeager and D'Angelo both denied the charge. Litigation ensued, in which his children accused D'Angelo of "undue influence" on Yeager, and Yeager accused his children of diverting millions of dollars from his assets. In August 2008, the California Court of Appeal ruled for Yeager, finding that his daughter Susan had breached her duty as trustee.
Yeager lived in Grass Valley, Northern California and died in the afternoon of December 7, 2020 (National Pearl Harbor Remembrance Day), at age 97, in a Los Angeles hospital. Following his death, President Donald Trump issued a statement of condolences stating Yeager "was one of the greatest pilots in history, a proud West Virginian, and an American original who relentlessly pushed the boundaries of human achievement".
See also
History of aviation
List of firsts in aviation
Society of Experimental Test Pilots
Notes
References
Further reading
Wolfe, Tom. The Right Stuff. New York: Farrar-Straus-Giroux, 1979.
Yeager, Chuck, Bob Cardenas, Bob Hoover, Jack Russell and James Young. The Quest for Mach One: A First-Person Account of Breaking the Sound Barrier. New York: Penguin Studio, 1997.
Yeager, Chuck and Leo Janos. Yeager: An Autobiography. New York: Bantam, 1985.
External links
Biography from ChuckYeager.org
U.S. Air Force: Chuck Yeager biography
Yeager in Biography.com
Biography in the National Aviation Hall of Fame
General Chuck Yeager, USAF, Biography and Interview with the American Academy of Achievement
Airport Journals, "Chuck Yeager: Booming And Zooming", Part 1 (http://airportjournals.com/chuck-yeager-booming-and-zooming-part-1/) and Part 2
"Chuck Yeager & the Sound Barrier" in Aerospaceweb.org
Space.com: Chuck Yeager
Yeager obituary via The New York Times
1923 births
2020 deaths
American aviation record holders
American expatriates in Pakistan
American people of German descent
American test pilots
American Vietnam War pilots
American World War II flying aces
Articles containing video clips
Aviation history of the United States
American aviation pioneers
Aviators from West Virginia
Collier Trophy recipients
Experimental Aircraft Association
Harmon Trophy winners
Knights of the Legion of Honour
Mackay Trophy winners
Military personnel from West Virginia
National Aviation Hall of Fame inductees
Order of National Security Merit members
People from Hamlin, West Virginia
People from Lincoln County, West Virginia
Presidential Medal of Freedom recipients
Recipients of the Air Force Distinguished Service Medal
Recipients of the Air Medal
Recipients of the Distinguished Flying Cross (United States)
Recipients of the Distinguished Service Medal (US Army)
Recipients of the Legion of Merit
Recipients of the Silver Star
Shot-down aviators
Survivors of aviation accidents or incidents
U.S. Air Force Test Pilot School alumni
United States Air Force generals
United States Air Force personnel of the Vietnam War
United States Army Air Forces officers
United States Army Air Forces pilots of World War II | Chuck Yeager | [
"Engineering"
] | 6,086 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
6,198 | https://en.wikipedia.org/wiki/Convention%20on%20Biological%20Diversity | The Convention on Biological Diversity (CBD), known informally as the Biodiversity Convention, is a multilateral treaty. The Convention has three main goals: the conservation of biological diversity (or biodiversity); the sustainable use of its components; and the fair and equitable sharing of benefits arising from genetic resources. Its objective is to develop national strategies for the conservation and sustainable use of biological diversity, and it is often seen as the key document regarding sustainable development.
The Convention was opened for signature at the Earth Summit in Rio de Janeiro on 5 June 1992 and entered into force on 29 December 1993. The United States is the only UN member state which has not ratified the Convention. It has two supplementary agreements, the Cartagena Protocol and Nagoya Protocol.
The Cartagena Protocol on Biosafety to the Convention on Biological Diversity is an international treaty governing the movements of living modified organisms (LMOs) resulting from modern biotechnology from one country to another. It was adopted on 29 January 2000 as a supplementary agreement to the CBD and entered into force on 11 September 2003.
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization (ABS) to the Convention on Biological Diversity is another supplementary agreement to the CBD. It provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. The Nagoya Protocol was adopted on 29 October 2010 in Nagoya, Japan, and entered into force on 12 October 2014.
2010 was also the International Year of Biodiversity, and the Secretariat of the CBD was its focal point. Following a recommendation of CBD signatories at Nagoya, the UN declared 2011 to 2020 as the United Nations Decade on Biodiversity in December 2010. The Convention's Strategic Plan for Biodiversity 2011–2020, created in 2010, include the Aichi Biodiversity Targets.
The meetings of the Parties to the Convention are known as Conferences of the Parties (COP), with the first one (COP 1) held in Nassau, Bahamas, in 1994 and the most recent one (COP 16) in 2024 in Cali, Colombia.
In the area of marine and coastal biodiversity CBD's focus at present is to identify Ecologically or Biologically Significant Marine Areas (EBSAs) in specific ocean locations based on scientific criteria. The aim is to create an international legally binding instrument (ILBI) involving area-based planning and decision-making under UNCLOS to support the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ treaty or High Seas Treaty).
Origin and scope
The notion of an international convention on biodiversity was conceived at a United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity in November 1988. The subsequent year, the Ad Hoc Working Group of Technical and Legal Experts was established for the drafting of a legal text which addressed the conservation and sustainable use of biological diversity, as well as the sharing of benefits arising from their utilization with sovereign states and local communities.
In 1991, an intergovernmental negotiating committee was established, tasked with finalizing the Convention's text.
A Conference for the Adoption of the Agreed Text of the Convention on Biological Diversity was held in Nairobi, Kenya, in 1992, and its conclusions were distilled in the Nairobi Final Act. The Convention's text was opened for signature on 5 June 1992 at the United Nations Conference on Environment and Development (the Rio "Earth Summit"). By its closing date, 4 June 1993, the Convention had received 168 signatures. It entered into force on 29 December 1993.
The Convention recognized for the first time in international law that the conservation of biodiversity is "a common concern of humankind" and is an integral part of the development process. The agreement covers all ecosystems, species, and genetic resources. It links traditional conservation efforts to the economic goal of using biological resources sustainably. It sets principles for the fair and equitable sharing of the benefits arising from the use of genetic resources, notably those destined for commercial use. It also covers the rapidly expanding field of biotechnology through its Cartagena Protocol on Biosafety, addressing technology development and transfer, benefit-sharing and biosafety issues. Importantly, the Convention is legally binding; countries that join it ('Parties') are obliged to implement its provisions.
The Convention reminds decision-makers of the finite status of natural resources and sets out a philosophy of sustainable use. While past conservation efforts were aimed at protecting particular species and habitats, the Convention recognizes that ecosystems, species and genes must be used for the benefit of humans. However, this should be done in a way and at a rate that does not lead to the long-term decline of biological diversity.
The Convention also offers decision-makers guidance based on the precautionary principle which demands that where there is a threat of significant reduction or loss of biological diversity, lack of full scientific certainty should not be used as a reason for postponing measures to avoid or minimize such a threat. The Convention acknowledges that substantial investments are required to conserve biological diversity. It argues, however, that conservation will bring us significant environmental, economic and social benefits in return.
The Convention on Biological Diversity of 2010 banned some forms of geoengineering.
Executive secretary
As of April 2024, the acting executive secretary is Astrid Schomaker.
The previous executive secretaries were: David Cooper (2023–2024), Elizabeth Maruma Mrema (2020–2023),
Cristiana Pașca Palmer (2017–2019), Braulio Ferreira de Souza Dias (2012–2017), Ahmed Djoghlaf (2006–2012), Hamdallah Zedan (1998–2005), Calestous Juma (1995–1998), and Angela Cropper (1993–1995).
Issues
Some of the many issues dealt with under the Convention include:
Measures and incentives for the conservation and sustainable use of biological diversity.
Regulated access to genetic resources and traditional knowledge, including Prior Informed Consent of the party providing resources.
Sharing, in a fair and equitable way, the results of research and development and the benefits arising from the commercial and other utilization of genetic resources with the Contracting Party providing such resources (governments and/or local communities that provided the traditional knowledge or biodiversity resources utilized).
Access to and transfer of technology, including biotechnology, to the governments and/or local communities that provided traditional knowledge and/or biodiversity resources.
Technical and scientific cooperation.
Coordination of a global directory of taxonomic expertise (Global Taxonomy Initiative).
Impact assessment.
Education and public awareness.
Provision of financial resources.
National reporting on efforts to implement treaty commitments.
International bodies established
Conference of the Parties (COP)
The Convention's governing body is the Conference of the Parties (COP), consisting of all governments (and regional economic integration organizations) that have ratified the treaty. This ultimate authority reviews progress under the Convention, identifies new priorities, and sets work plans for members. The COP can also make amendments to the Convention, create expert advisory bodies, review progress reports by member nations, and collaborate with other international organizations and agreements.
The Conference of the Parties uses expertise and support from several other bodies that are established by the Convention. In addition to committees or mechanisms established on an ad hoc basis, the main organs are:
CBD Secretariat
The CBD Secretariat, based in Montreal, Quebec, Canada, operates under UNEP, the United Nations Environment Programme. Its main functions are to organize meetings, draft documents, assist member governments in the implementation of the programme of work, coordinate with other international organizations, and collect and disseminate information.
Subsidiary Body for Scientific, Technical and Technological Advice (SBSTTA)
The SBSTTA is a committee composed of experts from member governments competent in relevant fields. It plays a key role in making recommendations to the COP on scientific and technical issues. It provides assessments of the status of biological diversity and of various measures taken in accordance with the Convention, and also gives recommendations to the Conference of the Parties, which may be endorsed in whole, in part or in modified form by the COPs. As of 2024, SBSTTA has met 26 times, with its 26th meeting having taken place in Nairobi, Kenya, in 2024.
Subsidiary Body on Implementation
In 2014, the Conference of the Parties to the Convention on Biological Diversity established the Subsidiary Body on Implementation (SBI) to replace the Ad Hoc Open-ended Working Group on Review of Implementation of the Convention. The four functions and core areas of work of SBI are: (a) review of progress in implementation; (b) strategic actions to enhance implementation; (c) strengthening means of implementation; and (d) operations of the Convention and the Protocols. The first meeting of the SBI was held on 2–6 May 2016 and the second meeting was held on 9–13 July 2018, both in Montreal, Canada. The latest (fifth) meeting of the SBI was held in October 2024 in Cali, Colombia. The Bureau of the Conference of the Parties serves as the Bureau of the SBI. The current chair of the SBI is Ms. Clarissa Souza Della Nina of Brazil.
Parties
As of 2016, the Convention has 196 Parties, which includes 195 states and the European Union. All UN member states—with the exception of the United States—have ratified the treaty. Non-UN member states that have ratified are the Cook Islands, Niue, and the State of Palestine. The Holy See and the states with limited recognition are non-Parties. The US has signed but not ratified the treaty, because ratification requires a two-thirds majority in the Senate and is blocked by Republican Party senators.
The European Union created the Cartagena Protocol (see below) in 2000 to enhance biosafety regulation and propagate the "precautionary principle" over the "sound science principle" defended by the United States. Whereas the impact of the Cartagena Protocol on domestic regulations has been substantial, its impact on international trade law remains uncertain. In 2006, the World Trade Organization (WTO) ruled that the European Union had violated international trade law between 1999 and 2003 by imposing a moratorium on the approval of genetically modified organisms (GMO) imports. Disappointing the United States, the panel nevertheless "decided not to decide" by not invalidating the stringent European biosafety regulations.
Implementation by the Parties to the Convention is achieved using two means:
National Biodiversity Strategies and Action Plans (NBSAP)
National Biodiversity Strategies and Action Plans (NBSAP) are the principal instruments for implementing the Convention at the national level. The Convention requires countries to prepare a national biodiversity strategy and to ensure that the strategy is included in planning for activities in all sectors where diversity may be impacted. As of early 2012, 173 Parties had developed NBSAPs.
The United Kingdom, New Zealand and Tanzania carried out elaborate responses to conserve individual species and specific habitats. The United States of America, a signatory who had not yet ratified the treaty by 2010, produced one of the most thorough implementation programs through species recovery programs and other mechanisms long in place in the US for species conservation.
Singapore established a detailed National Biodiversity Strategy and Action Plan. The National Biodiversity Centre of Singapore represents Singapore in the Convention for Biological Diversity.
National Reports
In accordance with Article 26 of the Convention, Parties prepare national reports on the status of implementation of the Convention.
Protocols and plans developed by CBD
Cartagena Protocol (2000)
The Cartagena Protocol on Biosafety, also known as the Biosafety Protocol, was adopted in January 2000, after a CBD Open-ended Ad Hoc Working Group on Biosafety had met six times between July 1996 and February 1999. The Working Group submitted a draft text of the Protocol for consideration by the Conference of the Parties at its first extraordinary meeting, which was convened for the express purpose of adopting a protocol on biosafety to the Convention on Biological Diversity. After a few delays, the Cartagena Protocol was eventually adopted on 29 January 2000. The Biosafety Protocol seeks to protect biological diversity from the potential risks posed by living modified organisms resulting from modern biotechnology.
The Biosafety Protocol makes clear that products from new technologies must be based on the precautionary principle and allow developing nations to balance public health against economic benefits. It will, for example, let countries ban imports of a genetically modified organism if they feel there is not enough scientific evidence the product is safe and requires exporters to label shipments containing genetically modified commodities such as corn or cotton.
The required number of 50 instruments of ratification/accession/approval/acceptance by countries was reached in May 2003. In accordance with the provisions of its Article 37, the Protocol entered into force on 11 September 2003.
Global Strategy for Plant Conservation (2002)
In April 2002, the Parties of the UN CBD adopted the recommendations of the Gran Canaria Declaration Calling for a Global Plant Conservation Strategy, and adopted a 16-point plan aiming to slow the rate of plant extinctions around the world by 2010.
Nagoya Protocol (2010)
The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was adopted on 29 October 2010 in Nagoya, Aichi Prefecture, Japan, at the tenth meeting of the Conference of the Parties, and entered into force on 12 October 2014. The protocol is a supplementary agreement to the Convention on Biological Diversity, and provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. It thereby contributes to the conservation and sustainable use of biodiversity.
Strategic Plan for Biodiversity 2011–2020
Also at the tenth meeting of the Conference of the Parties, held from 18 to 29 October 2010 in Nagoya, a revised and updated "Strategic Plan for Biodiversity, 2011–2020" was agreed and published. This document included the "Aichi Biodiversity Targets", comprising 20 targets that address each of five strategic goals defined in the plan. The strategic plan includes the following strategic goals:
Strategic Goal A: Address the underlying causes of biodiversity loss by mainstreaming biodiversity across government and society
Strategic Goal B: Reduce the direct pressures on biodiversity and promote sustainable use
Strategic Goal C: Improve the status of biodiversity by safeguarding ecosystems, species and genetic diversity
Strategic Goal D: Enhance the benefits to all from biodiversity and ecosystem services
Strategic Goal E: Enhance implementation through participatory planning, knowledge management and capacity building
Upon the launch of Agenda 2030, the CBD released a technical note mapping and identifying synergies between the 17 Sustainable Development Goals (SDGs) and the 20 Aichi Biodiversity Targets. This mapping helps clarify the contributions of biodiversity to achieving the SDGs.
Post-2020 Global Biodiversity Framework
A new plan, known as the post-2020 Global Biodiversity Framework (GBF) was developed to guide action through 2030. A first draft of this framework was released in July 2021, and its final content was discussed and negotiated as part of the COP 15 meetings. Reducing agricultural pollution and sharing the benefits of digital sequence information arose as key points of contention among Parties during development of the framework. A final version was adopted by the Convention on 19 December 2022. The framework includes a number of ambitious goals, including a commitment to designate at least 30 percent of global land and sea as protected areas (known as the "30 by 30" initiative).
Marine and coastal biodiversity
The CBD has a significant focus on marine and coastal biodiversity. A series of expert workshops (2018–2022) identified options for modifying the descriptions of existing Ecologically or Biologically Significant Marine Areas (EBSAs) and for describing new areas. These have focused on the North-East, North-West and South-Eastern Atlantic Ocean, Baltic Sea, Caspian Sea, Black Sea, Seas of East Asia, North-West Indian Ocean and Adjacent Gulf Areas, Southern and North-East Indian Ocean, Mediterranean Sea, North and South Pacific, Eastern Tropical and Temperate Pacific, Wider Caribbean and Western Mid-Atlantic. The workshops have followed the EBSA process based on internationally agreed scientific criteria. This work is aimed at supporting an international legally binding instrument (ILBI) under UNCLOS for the conservation and sustainable use of marine biological diversity beyond areas of national jurisdiction (BBNJ, or the High Seas Treaty). The central mechanism is area-based planning and decision-making, which integrates EBSAs, Vulnerable Marine Ecosystems (VMEs) and high seas marine protected areas with Blue Growth scenarios. There is also linkage with the EU Marine Strategy Framework Directive.
Criticism
The CBD has been criticized on the grounds that its implementation has been weakened by the resistance of Western countries to the pro-South provisions of the Convention. The CBD is also regarded as a case of a hard treaty gone soft in its implementation trajectory. The argument for enforcing the treaty as a legally binding multilateral instrument, with the Conference of the Parties reviewing infractions and non-compliance, is also gaining strength.
Although the Convention explicitly states that all forms of life are covered by its provisions, examination of reports and of national biodiversity strategies and action plans submitted by participating countries shows that in practice this is not happening. The fifth report of the European Union, for example, makes frequent reference to animals (particularly fish) and plants, but does not mention bacteria, fungi or protists at all. The International Society for Fungal Conservation has assessed more than 100 of these CBD documents for their coverage of fungi, using defined criteria to place each in one of six categories. No documents were assessed as good or adequate, fewer than 10% as nearly adequate or poor, and the rest as deficient, seriously deficient or totally deficient.
Scientists working in biodiversity and medical research have expressed fears that the Nagoya Protocol is counterproductive and will hamper disease prevention and conservation efforts, and that the threat of imprisonment of scientists will have a chilling effect on research. Non-commercial researchers and institutions such as natural history museums fear that maintaining biological reference collections and exchanging material between institutions will become difficult, and medical researchers have expressed alarm at plans to expand the protocol to make it illegal to publicly share genetic information, e.g. via GenBank.
William Yancey Brown, when with the Brookings Institution, suggested that the Convention on Biological Diversity should include the preservation of intact genomes and viable cells for every known species and for new species as they are discovered.
Meetings of the Parties
A Conference of the Parties (COP) was held annually from 1994 to 1996, and thereafter every two years in even-numbered years (with COP 15 delayed to 2021–2022 by the COVID-19 pandemic).
{| class="wikitable sortable"
|-
! style="text-align:center" | COP
! style="text-align:center" | Year
! style="text-align:left" | Country
! style="text-align:center" | Begin
! style="text-align:center" | End
! style="text-align:center" | Days
! style="text-align:left" | City
! style="text-align:left" | Link
|-
| style="text-align:center"| 1 || style="text-align:center"| 1994 || || style="text-align:center"| 28.11.1994 || style="text-align:center"| 09.12.1994 || style="text-align:center"| 12 || Nassau || COP 1
|-
| style="text-align:center"| 2 || style="text-align:center"| 1995 || || style="text-align:center"| 06.11.1995 || style="text-align:center"| 17.11.1995 || style="text-align:center"| 12 || Jakarta || COP 2
|-
| style="text-align:center"| 3 || style="text-align:center"| 1996 || || style="text-align:center"| 04.11.1996 || style="text-align:center"| 15.11.1996 || style="text-align:center"| 12 || Buenos Aires || COP 3
|-
| style="text-align:center"| 4 || style="text-align:center"| 1998 || || style="text-align:center"| 04.05.1998 || style="text-align:center"| 15.05.1998 || style="text-align:center"| 12 || Bratislava || COP 4
|-
| style="text-align:center"| 5 || style="text-align:center"| 2000 || || style="text-align:center"| 15.05.2000 || style="text-align:center"| 26.05.2000 || style="text-align:center"| 12 || Nairobi || COP 5
|-
| style="text-align:center"| 6 || style="text-align:center"| 2002 || || style="text-align:center"| 07.04.2002 || style="text-align:center"| 19.04.2002 || style="text-align:center"| 13 || The Hague || COP 6
|-
| style="text-align:center"| 7 || style="text-align:center"| 2004 || || style="text-align:center"| 09.02.2004 || style="text-align:center"| 20.02.2004 || style="text-align:center"| 12 || Kuala Lumpur || COP 7
|-
| style="text-align:center"| 8 || style="text-align:center"| 2006 || || style="text-align:center"| 20.03.2006 || style="text-align:center"| 31.03.2006 || style="text-align:center"| 12 || Curitiba || COP 8
|-
| style="text-align:center"| 9 || style="text-align:center"| 2008 || || style="text-align:center"| 19.05.2008 || style="text-align:center"| 30.05.2008 || style="text-align:center"| 12 || Bonn || COP 9
|-
| style="text-align:center"| 10 || style="text-align:center"| 2010 || || style="text-align:center"| 18.10.2010 || style="text-align:center"| 29.10.2010 || style="text-align:center"| 12 || Nagoya || COP 10
|-
| style="text-align:center"| 11 || style="text-align:center"| 2012 || || style="text-align:center"| 08.10.2012 || style="text-align:center"| 19.10.2012 || style="text-align:center"| 12 || Hyderabad || COP 11
|-
| style="text-align:center"| 12 || style="text-align:center"| 2014 || || style="text-align:center"| 06.10.2014 || style="text-align:center"| 17.10.2014 || style="text-align:center"| 12 || Pyeongchang || COP 12
|-
| style="text-align:center"| 13 || style="text-align:center"| 2016 || || style="text-align:center"| 04.12.2016 || style="text-align:center"| 17.12.2016 || style="text-align:center"| 14 || Cancun || COP 13
|-
| style="text-align:center"| 14 || style="text-align:center"| 2018 || || style="text-align:center"| 13.11.2018 || style="text-align:center"| 29.11.2018 || style="text-align:center"| 17 || Sharm El Sheik || COP 14
|-
| style="text-align:center"| 15 || style="text-align:center"| 2022 || || style="text-align:center"| 07.12.2022 || style="text-align:center"| 19.12.2022 || style="text-align:center"| 13 || Montreal || COP 15
|-
| style="text-align:center"| 16 || style="text-align:center"| 2024 || || style="text-align:center"| 21.10.2024 || style="text-align:center"| 01.11.2024 || style="text-align:center"| 12 || Cali || COP 16
|}
1994 COP 1
The first ordinary meeting of the Parties to the Convention took place in November and December 1994, in Nassau, Bahamas. The International Coral Reef Initiative (ICRI) was launched at this first COP for the Convention on Biological Diversity.
1995 COP 2
The second ordinary meeting of the Parties to the Convention took place in November 1995, in Jakarta, Indonesia.
1996 COP 3
The third ordinary meeting of the Parties to the Convention took place in November 1996, in Buenos Aires, Argentina.
1998 COP 4
The fourth ordinary meeting of the Parties to the Convention took place in May 1998, in Bratislava, Slovakia.
1999 EX-COP 1 (Cartagena)
The First Extraordinary Meeting of the Conference of the Parties took place in February 1999, in Cartagena, Colombia. A series of meetings led to the adoption of the Cartagena Protocol on Biosafety in January 2000, effective from 2003.
2000 COP 5
The fifth ordinary meeting of the Parties to the Convention took place in May 2000, in Nairobi, Kenya.
2002 COP 6
The sixth ordinary meeting of the Parties to the Convention took place in April 2002, in The Hague, Netherlands.
2004 COP 7
The seventh ordinary meeting of the Parties to the Convention took place in February 2004, in Kuala Lumpur, Malaysia.
2006 COP 8
The eighth ordinary meeting of the Parties to the Convention took place in March 2006, in Curitiba, Brazil.
2008 COP 9
The ninth ordinary meeting of the Parties to the Convention took place in May 2008, in Bonn, Germany.
2010 COP 10 (Nagoya)
The tenth ordinary meeting of the Parties to the Convention took place in October 2010, in Nagoya, Japan. It was at this meeting that the Nagoya Protocol was adopted.
2010 was the International Year of Biodiversity, which resulted in 110 reports on the loss of biodiversity in different countries, but little or no progress toward the goal of "significant reduction" in the problem. Following a recommendation of CBD signatories, the UN declared 2011 to 2020 as the United Nations Decade on Biodiversity.
2012 COP 11
In the lead-up to the Conference of the Parties (COP 11) meeting on biodiversity in Hyderabad, India, in 2012, preparations for a World Wide Views on Biodiversity had begun, involving old and new partners and building on the experiences from the World Wide Views on Global Warming.
2014 COP 12
Under the theme, "Biodiversity for Sustainable Development", thousands of representatives of governments, NGOs, indigenous peoples, scientists and the private sector gathered in Pyeongchang, Republic of Korea in October 2014 for the 12th meeting of the Conference of the Parties to the Convention on Biological Diversity (COP 12).
From 6 to 17 October 2014, Parties discussed the implementation of the Strategic Plan for Biodiversity 2011–2020 and its Aichi Biodiversity Targets, which were to be achieved by the end of the decade. The results of Global Biodiversity Outlook 4, the flagship assessment report of the CBD, informed the discussions.
The conference carried out a mid-term evaluation of the UN Decade on Biodiversity (2011–2020) initiative, which aims to promote the conservation and sustainable use of nature. The meeting adopted a total of 35 decisions, including one on "Mainstreaming gender considerations", to incorporate a gender perspective into the analysis of biodiversity.
At its close, the meeting adopted the "Pyeongchang Road Map", which addresses ways to achieve biodiversity goals through technology cooperation, funding and strengthening the capacity of developing countries.
2016 COP 13
The thirteenth ordinary meeting of the Parties to the Convention took place between 4 and 17 December 2016 in Cancún, Mexico.
2018 COP 14
The 14th ordinary meeting of the Parties to the Convention took place on 17–29 November 2018, in Sharm El-Sheikh, Egypt. The 2018 UN Biodiversity Conference closed on 29 November 2018 with broad international agreement on reversing the global destruction of nature and the biodiversity loss threatening all forms of life on Earth. Parties adopted the Voluntary Guidelines for the design and effective implementation of ecosystem-based approaches to climate change adaptation and disaster risk reduction. Governments also agreed to accelerate action through 2020 to achieve the Aichi Biodiversity Targets agreed in 2010. Work to achieve these targets would take place at the global, regional, national and subnational levels.
2021/2022 COP 15
The 15th meeting of the Parties was originally scheduled to take place in Kunming, China in 2020, but was postponed several times due to the COVID-19 pandemic. After the start date was delayed for a third time, the meeting was split into two sessions. A mostly online event took place in October 2021, where over 100 nations signed the Kunming declaration on biodiversity. The theme of the declaration was "Ecological Civilization: Building a Shared Future for All Life on Earth". Twenty-one action-oriented draft targets were provisionally agreed in the October meeting, to be further discussed in the second session: an in-person event originally scheduled to start in April 2022, but rescheduled to later in 2022. The second part of COP 15 ultimately took place in Montreal, Canada, from 7 to 19 December 2022. At the meeting, the Parties to the Convention adopted a new action plan, the Kunming-Montreal Global Biodiversity Framework.
2024 COP 16
The 16th meeting of the Parties was held in Cali, Colombia, from 21 October to 1 November 2024. Turkey had originally been due to host the meeting, but withdrew after a series of earthquakes in February 2023.
See also
2010 Biodiversity Indicators Partnership
2010 Biodiversity Target
30 by 30
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs)
Biodiversity banking
Biological Diversity Act, 2002
Biopiracy
Bioprospecting
Biosphere Reserve
Convention on the Conservation of Migratory Species of Wild Animals
Convention on the International Trade in Endangered Species of Wild Flora and Fauna
Convention on Wetlands of International Importance, especially as Waterfowl Habitat
Ecotourism
Endangered species
Endangered Species Recovery Plan
Environmental agreements
Environmental Modification Convention, a ban on hostile use of weather modification / climate engineering
Globally Important Agricultural Heritage Systems (GIAHS)
Green Development Initiative (GDI)
Holocene extinction
Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services
International Cooperative Biodiversity Groups
International Organization for Biological Control
International Treaty on Plant Genetic Resources for Food and Agriculture
International Day for Biological Diversity
International Year of Biodiversity
Kunming-Montreal Global Biodiversity Framework
Migratory Bird Treaty Act of 1918
Red Data Book of Singapore
Red Data Book of the Russian Federation
Satoyama
Sustainable forest management
United Nations Convention to Combat Desertification
United Nations Decade on Biodiversity
United Nations Framework Convention on Climate Change
World Conservation Monitoring Centre
References
Further reading
Davis, K. 2008. A CBD Manual for Botanic Gardens. Botanic Gardens Conservation International (BGCI). Available in English and Italian versions.
External links
The Convention on Biological Diversity (CBD) website
Text of the Convention from CBD website
Ratifications at depositary
Case studies on the implementation of the Convention from BGCI website with links to relevant articles
Introductory note by Laurence Boisson de Chazournes, procedural history note and audiovisual material on the Convention on Biological Diversity in the Historic Archives of the United Nations Audiovisual Library of International Law
United Nations treaties
Treaties concluded in 1992
Treaties entered into force in 1993
Animal treaties
Treaties entered into by the European Union
Treaties of the Afghan Transitional Administration
Treaties of Albania
Treaties of Algeria
Treaties of Andorra
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Brunei
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of the Cook Islands
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of North Korea
Treaties of Zaire
Treaties of Denmark
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of Eswatini
Treaties of the Transitional Government of Ethiopia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Haiti
Treaties of Honduras
Treaties of Hungary
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Iraq
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of the Federated States of Micronesia
Treaties of Moldova
Treaties of Monaco
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Myanmar
Treaties of Namibia
Treaties of Nauru
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Niue
Treaties of North Macedonia
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of the State of Palestine
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Samoa
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of the Transitional Federal Government of Somalia
Treaties of South Africa
Treaties of South Korea
Treaties of South Sudan
Treaties of Spain
Treaties of Sri Lanka
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Suriname
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Thailand
Treaties of Timor-Leste
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Tuvalu
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Tanzania
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties extended to Jersey
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to Gibraltar
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to the Isle of Man
Treaties extended to Greenland
Treaties extended to the Faroe Islands
Treaties extended to Portuguese Macau
Treaties extended to Hong Kong
Treaties extended to the Falkland Islands
Treaties extended to South Georgia and the South Sandwich Islands
Anti-biopiracy treaties
Biopiracy | Convention on Biological Diversity | [
"Biology"
] | 7,563 | [
"Anti-biopiracy treaties",
"Convention on Biological Diversity",
"Biodiversity",
"Biopiracy"
] |
6,201 | https://en.wikipedia.org/wiki/CITES | CITES (short for the Convention on International Trade in Endangered Species of Wild Fauna and Flora, also known as the Washington Convention) is a multilateral treaty to protect endangered plants and animals from the threats of international trade. It was drafted as a result of a resolution adopted in 1963 at a meeting of members of the International Union for Conservation of Nature (IUCN). The convention was opened for signature in 1973 and CITES entered into force on 1 July 1975.
Its aim is to ensure that international trade (import/export) in specimens of animals and plants included under CITES does not threaten the survival of the species in the wild. This is achieved via a system of permits and certificates. CITES affords varying degrees of protection to more than 40,900 species.
Ivonne Higuero has served as Secretary-General of CITES since 2018.
Background
CITES is one of the largest and oldest conservation and sustainable use agreements in existence. There are three working languages of the Convention (English, French and Spanish) in which all documents are made available. Participation is voluntary, and countries that have agreed to be bound by the convention are known as Parties. Although CITES is legally binding on the Parties, it does not take the place of national laws. Rather, it provides a framework respected by each Party, which must adopt its own domestic legislation to implement CITES at the national level.
Originally, CITES addressed depletion resulting from demand for luxury goods such as furs in Western countries, but with the rising wealth of Asia, particularly in China, the focus changed to products demanded there, particularly those used for luxury goods such as elephant ivory or rhinoceros horn. As of 2022, CITES had expanded to include thousands of species previously considered unremarkable and in no danger of extinction, such as manta rays or pangolins.
Ratifications
The text of the convention was finalized at a meeting of representatives of 80 countries in Washington, D.C., United States, on 3 March 1973. It was then open for signature until 31 December 1974. It entered into force after the 10th ratification by a signatory country, on 1 July 1975. Countries that signed the Convention become Parties by ratifying, accepting or approving it. By the end of 2003, all signatory countries had become Parties. States that were not signatories may become Parties by acceding to the convention. As of 2022, the convention has 184 parties, including 183 states and the European Union.
The CITES Convention includes provisions and rules for trade with non-Parties. All member states of the United Nations are party to the treaty, with the exception of North Korea, Federated States of Micronesia, Haiti, Kiribati, Marshall Islands, Nauru, South Sudan, East Timor, Turkmenistan, and Tuvalu. UN observer the Holy See is also not a member. The Faroe Islands, an autonomous region in the Kingdom of Denmark, is also treated as a non-Party to CITES (both the Danish mainland and Greenland are part of CITES).
An amendment to the text of the convention, known as the Gaborone Amendment allows regional economic integration organizations (REIO), such as the European Union, to have the status of a member state and to be a Party to the convention. The REIO can vote at CITES meetings with the number of votes representing the number of members in the REIO, but it does not have an additional vote.
In accordance with Article XVII, paragraph 3, of the CITES Convention, the Gaborone Amendment entered into force on 29 November 2013, 60 days after 54 (two-thirds) of the 80 States that were party to CITES on 30 April 1983 deposited their instrument of acceptance of the amendment. At that time it entered into force only for those States that had accepted the amendment. The amended text of the convention will apply automatically to any State that becomes a Party after 29 November 2013. For States that became party to the convention before that date and have not accepted the amendment, it will enter into force 60 days after they accept it.
Governing Structure of CITES
CITES operates to support the member Parties. This support consists of input from three committees (the Standing, Animals and Plants Committees), which are overseen by the Secretary-General. The post of Secretary-General has been held by people from a variety of nations.
Timeline of CITES Secretary-General Offices
1978–1981: Peter H. Sand
He was born in Bavaria, Germany and was educated in international law in Germany, France and Canada. He became a professor and an author, focusing on environmental law, and held other positions such as Director-General of the IUCN and legal advisor for environmental affairs to the World Bank.
1982–1990: Eugene Lapointe
A Canadian native, Lapointe served in the military for many years and acted as a diplomat before heading CITES. He is currently an author and President of the IWMC World Conservation Trust, a non-profit organization that promotes wildlife conservation with an emphasis on a human-centered approach to natural resources.
1991–1998: Izgrev Topkov
Born and raised in Bulgaria, Topkov was a diplomat before managing CITES, and was removed from the position following the misuse of permits in violation of CITES guidelines.
1999–2010: Willem Wijnstekers
A native of the Netherlands and a graduate of the University of Amsterdam, Wijnstekers held the position of Secretary-General for the longest period and is now an author.
2010–2018: John E. Scanlon
An Australian who studied environmental law, Scanlon was active in combating illegal animal trade and currently works to protect elephants in Africa as CEO of the Elephant Protection Initiative Foundation (EPIF).
2018–present: Ivonne Higuero
The first woman to hold the position, Higuero is from Panama and was educated in environmental economics.
Regulation of trade
CITES works by subjecting international trade in specimens of listed taxa to controls as they move across international borders. CITES specimens can include a wide range of items including the whole animal/plant (whether alive or dead), or a product that contains a part or derivative of the listed taxa such as cosmetics or traditional medicines.
Four types of trade are recognized by CITES: import, export, re-export (export of any specimen that has previously been imported) and introduction from the sea (transportation into a state of specimens of any species which were taken in the marine environment not under the jurisdiction of any state). The CITES definition of "trade" does not require a financial transaction to be occurring. All trade in specimens of species covered by CITES must be authorized through a system of permits and certificates prior to the trade taking place. CITES permits and certificates are issued by one or more Management Authorities in charge of administering the CITES system in each country. Management Authorities are advised by one or more Scientific Authorities on the effects of trade of the specimen on the status of CITES-listed species. CITES permits and certificates must be presented to relevant border authorities in each country in order to authorize the trade.
Each Party must enact its own domestic legislation to bring the provisions of CITES into effect in its territory. Parties may choose to take stricter domestic measures than CITES provides (for example, by requiring permits/certificates in cases where they would not normally be needed or by prohibiting trade in some specimens).
Appendices
Over 40,900 species, subspecies and populations are protected under CITES. Each protected taxon or population is included in one of three lists called Appendices. The Appendix that lists a taxon or population reflects the level of threat posed by international trade and the CITES controls that apply.
Taxa may be split-listed, meaning that some populations of a species are on one Appendix, while some are on another. The African bush elephant (Loxodonta africana) is currently split-listed, with all populations except those of Botswana, Namibia, South Africa and Zimbabwe listed in Appendix I. Those of Botswana, Namibia, South Africa and Zimbabwe are listed in Appendix II. There are also species that have only some populations listed in an Appendix. One example is the pronghorn (Antilocapra americana), a ruminant native to North America. Its Mexican population is listed in Appendix I, but its U.S. and Canadian populations are not listed (though certain U.S. populations in Arizona are nonetheless protected under other domestic legislation, in this case the Endangered Species Act).
Taxa are proposed for inclusion, amendment or deletion in Appendices I and II at meetings of the Conference of the Parties (CoP), which are held approximately once every three years. Amendments to listing in Appendix III may be made unilaterally by individual parties.
Appendix I
Appendix I taxa are those that are threatened with extinction and to which the highest level of CITES protection is afforded. Commercial trade in wild-sourced specimens of these taxa is not permitted and non-commercial trade is strictly controlled by requiring an import permit and export permit to be granted by the relevant Management Authorities in each country before the trade occurs.
Notable taxa listed in Appendix I include the red panda (Ailurus fulgens), western gorilla (Gorilla gorilla), the chimpanzee species (Pan spp.), tigers (Panthera tigris subspecies), Asian elephant (Elephas maximus), snow leopard (Panthera uncia), red-shanked douc (Pygathrix nemaeus), some populations of African bush elephant (Loxodonta africana), and the monkey puzzle tree (Araucaria araucana).
Appendix II
Appendix II taxa are those that are not necessarily threatened with extinction, but trade must be controlled in order to avoid utilization incompatible with their survival. Appendix II taxa may also include species similar in appearance to species already listed in the Appendices. The vast majority of taxa listed under CITES are listed in Appendix II. Any trade in Appendix II taxa standardly requires a CITES export permit or re-export certificate to be granted by the Management Authority of the exporting country before the trade occurs.
Examples of taxa listed on Appendix II are the great white shark (Carcharodon carcharias), the American black bear (Ursus americanus), Hartmann's mountain zebra (Equus zebra hartmannae), green iguana (Iguana iguana), queen conch (Strombus gigas), emperor scorpion (Pandinus imperator), Mertens' water monitor (Varanus mertensi), bigleaf mahogany (Swietenia macrophylla), lignum vitae (Guaiacum officinale), the chambered nautilus (Nautilus pompilius), all stony corals (Scleractinia spp.), Jungle cat (Felis chaus) and American ginseng (Panax quinquefolius).
Appendix III
Appendix III species are those that are protected in at least one country, and that country has asked other CITES Parties for assistance in controlling the trade.
Any trade in Appendix III species standardly requires a CITES export permit (if sourced from the country that listed the species) or a certificate of origin (from any other country) to be granted before the trade occurs.
Examples of species listed on Appendix III and the countries that listed them are the Hoffmann's two-toed sloth (Choloepus hoffmanni) by Costa Rica, sitatunga (Tragelaphus spekii) by Ghana and African civet (Civettictis civetta) by Botswana.
Exemptions and special procedures
Under Article VII, the Convention allows for certain exceptions to the general trade requirements described above.
Pre-Convention specimens
CITES provides for a special process for specimens that were acquired before the provisions of the Convention applied to that specimen. These are known as "pre-Convention" specimens and must be granted a CITES pre-Convention certificate before the trade occurs. Only specimens legally acquired before the date on which the species concerned was first included in the Appendices qualify for this exemption.
Personal and household effects
CITES provides that the standard permit/certificate requirements for trade in CITES specimens do not generally apply if a specimen is a personal or household effect. However, there are a number of situations where permits/certificates for personal or household effects are required, and some countries choose to take stricter domestic measures by requiring permits/certificates for some or all personal or household effects.
Captive bred or artificially propagated specimens
CITES allows trade in specimens to follow special procedures if Management Authorities are satisfied that they are sourced from captive bred animals or artificially propagated plants.
In the case of commercial trade of Appendix I taxa, captive bred or artificially propagated specimens may be traded as if they were Appendix II. This reduces the permit requirements from two permits (import/export) to one (export only).
In the case of non-commercial trade, specimens may be traded with a certificate of captive breeding/artificial propagation issued by the Management Authority of the state of export in lieu of standard permits.
Scientific exchange
Standard CITES permit and certificates are not required for the non-commercial loan, donation or exchange between scientific or forensic institutions that have been registered by a Management Authority of their State. Consignments containing the specimens must carry a label issued or approved by that Management Authority (in some cases Customs Declaration labels may be used). Specimens that may be included under this provision include museum, herbarium, diagnostic and forensic research specimens. Registered institutions are listed on the CITES website.
Amendments and reservations
Amendments to the Convention must be supported by a two-thirds majority of Parties "present and voting" and can be made during an extraordinary meeting of the COP if one-third of the Parties are interested in such a meeting. The Gaborone Amendment (1983) allows regional economic blocs to accede to the treaty. Trade with non-Party states is allowed, although permits and certificates are recommended to be issued by exporters and sought by importers.
Species in the Appendices may be proposed for addition, change of Appendix, or de-listing (i.e., deletion) by any Party, whether or not it is a range State, and changes may be made despite objections by range States if there is sufficient (two-thirds majority) support for the listing. Species listings are made at the Conference of the Parties.
Upon acceding to the Convention or within 90 days of a species listing being amended, Parties may make reservations. In these cases, the party is treated as being a state that is not a Party to CITES with respect to trade in the species concerned. Notable reservations include those by Iceland, Japan, and Norway on various baleen whale species and those on Falconiformes by Saudi Arabia.
Shortcomings and concerns
Implementation
As of 2002, 50% of Parties lacked one or more of the four major CITES requirements: designation of Management and Scientific Authorities; laws prohibiting trade in violation of CITES; penalties for such trade; and laws providing for the confiscation of specimens.
Although the Convention itself does not provide for arbitration or dispute resolution in the case of noncompliance, 36 years of CITES in practice have resulted in several strategies to deal with infractions by Parties. The Secretariat, when informed of an infraction by a Party, will notify all other Parties. The Secretariat will give the Party time to respond to the allegations and may provide technical assistance to prevent further infractions. Other actions that the Convention itself does not provide for, but that derive from subsequent COP resolutions, may be taken against the offending Party. These include:
Mandatory confirmation of all permits by the Secretariat
Suspension of cooperation from the Secretariat
A formal warning
A visit by the Secretariat to verify capacity
Recommendations to all Parties to suspend CITES related trade with the offending party
Dictation of corrective measures to be taken by the offending Party before the Secretariat will resume cooperation or recommend resumption of trade
Bilateral sanctions have been imposed on the basis of national legislation (e.g. the USA used certification under the Pelly Amendment to get Japan to revoke its reservation to hawksbill turtle products in 1991, thus reducing the volume of its exports).
Infractions may include negligence with respect to permit issuing, excessive trade, lax enforcement, and failing to produce annual reports (the most common).
Approach to biodiversity conservation
General limitations of the structure and philosophy of CITES include: by design and intent it focuses on trade at the species level and does not address habitat loss, ecosystem approaches to conservation, or poverty; it seeks to prevent unsustainable use rather than promote sustainable use (which generally conflicts with the Convention on Biological Diversity), although this has been changing (see Nile crocodile, African elephant, and South African white rhino case studies in Hutton and Dickinson 2000). It does not explicitly address market demand. In fact, CITES listings have been demonstrated to increase financial speculation in certain markets for high-value species. CITES funding does not provide for increased on-the-ground enforcement (the Convention must apply for bilateral aid for most projects of this nature).
There has been increasing willingness within the Parties to allow for trade in products from well-managed populations. For instance, sales of the South African white rhino have generated revenues that helped pay for protection. Listing the species on Appendix I increased the price of rhino horn (which fueled more poaching), but the species survived wherever there was adequate on-the-ground protection. Thus field protection may be the primary mechanism that saved the population, but it is likely that field protection would not have been increased without CITES protection. In another instance, the United States initially stopped exports of bobcat and lynx hides in 1977 when it first implemented CITES, for lack of data to support no-detriment findings. However, in a Federal Register notice issued by William Yancey Brown, the U.S. Endangered Species Scientific Authority (ESSA) established a framework of no-detriment findings for each state and the Navajo Nation, and indicated that approval would be forthcoming if the states and the Navajo Nation provided evidence that their furbearer management programs assured the species would be conserved. Management programs for these species expanded rapidly, including tagging for export, and are currently recognized in program approvals under regulations of the U.S. Fish and Wildlife Service.
Drafting
By design, CITES regulates and monitors trade in the manner of a "negative list", such that trade in all species is permitted and unregulated unless the species in question appears on the Appendices or looks very much like one of those taxa. Then and only then is trade regulated or constrained. Because the remit of the Convention covers millions of species of plants and animals, and tens of thousands of these taxa are potentially of economic value, in practice this negative-list approach effectively forces CITES signatories to expend limited resources on just a select few, leaving many species to be traded with neither constraint nor review. For example, several bird species classified as threatened with extinction have appeared in the legal wild bird trade because the CITES process never considered their status. If a "positive list" approach were taken, only species evaluated and approved for the positive list would be permitted in trade, thus lightening the review burden for member states and the Secretariat, and also preventing inadvertent legal trade threats to poorly known species.
Specific weaknesses in the text include: it does not stipulate guidelines for the 'non-detriment' finding required of national Scientific Authorities; non-detriment findings require copious amounts of information; the 'household effects' clause is often not rigid enough/specific enough to prevent CITES violations by means of this Article (VII); non-reporting from Parties means Secretariat monitoring is incomplete; and it has no capacity to address domestic trade in listed species.
In order to ensure that the General Agreement on Tariffs and Trade (GATT) was not violated, the Secretariat of GATT was consulted during the drafting process.
Animal sourced pathogens
During the coronavirus pandemic in 2020, Secretary-General Ivonne Higuero noted that illegal wildlife trade helps to destroy habitats, and that these habitats form a safety barrier for humans that can prevent animal pathogens from passing to people.
Reform suggestions
Suggestions for improvement in the operation of CITES include: more regular missions by the Secretariat (not reserved just for high-profile species); improvement of national legislation and enforcement; better reporting by Parties (and the consolidation of information from all sources: NGOs, TRAFFIC (the wildlife trade monitoring network) and Parties); more emphasis on enforcement, including a technical committee enforcement officer; and the development of CITES Action Plans (akin to the Biodiversity Action Plans of the Convention on Biological Diversity), including designation of Scientific/Management Authorities and national enforcement strategies, incentives for reporting, and timelines for both Action Plans and reporting. CITES would benefit from access to Global Environment Facility (GEF) funds, although this is difficult given the GEF's more ecosystem-oriented approach, or from other more regular funds. Development of a future mechanism similar to that of the Montreal Protocol (developed nations contribute to a fund for developing nations) could allow more funds for non-Secretariat activities.
TRAFFIC Data
From 2005 to 2009, the legal trade included the following volumes:
317,000 live birds
More than 2 million live reptiles
2.5 million crocodile skins
2.1 million snake skins
73 tons of caviar
1.1 million beaver skins
Millions of pieces of coral
20,000 mammalian hunting trophies
In the 1990s, the legal trade in animal products was worth an estimated $160 billion annually. By 2009 the estimated value had almost doubled, to $300 billion.
TRAFFIC released a report in December 2024 outlining the illegal trade in animal products occurring in Vietnam.
Additional information about the documented trade can be extracted through queries on the CITES website.
Meetings
The Conference of the Parties (CoP) is held once every three years. The location of the next CoP is chosen at the close of each CoP by a secret ballot vote.
The CITES Committees (Animals Committee, Plants Committee and Standing Committee) hold meetings during each year that does not have a CoP, while the Standing committee meets also in years with a CoP. The Committee meetings take place in Geneva, Switzerland (where the Secretariat of the CITES Convention is located), unless another country offers to host the meeting. The Secretariat is administered by UNEP. The Animals and Plants Committees have sometimes held joint meetings. The previous joint meeting was held in March 2012 in Dublin, Ireland, and the latest one was held in Veracruz, Mexico, in May 2014.
A current list of upcoming meetings appears on the CITES calendar.
At the seventeenth Conference of the Parties (CoP 17), Namibia and Zimbabwe introduced proposals to amend the listing of their elephant populations in Appendix II. Instead, they wished to establish controlled trade in all elephant specimens, including ivory. They argued that revenue from regulated trade could be used for elephant conservation and rural communities' development. However, both proposals were opposed by the US and other countries.
See also
Environmental agreements
Illegal logging
IUCN Red List
Ivory trade
Lacey Act
List of species protected by CITES Appendix I
List of species protected by CITES Appendix II
List of species protected by CITES Appendix III
Shark finning
Wildlife conservation
Wildlife Enforcement Monitoring System
Wildlife management
Wildlife smuggling
World Wildlife Day
Footnotes
References
Further reading
Oldfield, S. and McGough, N. (comps.) 2007. A CITES Manual for Botanic Gardens. Botanic Gardens Conservation International (BGCI). Available in English, Spanish and Italian versions.
External links
CITES Profile on database of market governance mechanisms (archived 13 March 2016)
Member countries (Parties)
Chronological list of Parties
Alphabetical list of Parties at CITES and at the depositary (PDF)
National contacts (archived 20 January 2011)
Lists of species included in Appendices I, II and III (i.e. species protected by CITES)
Explanation of the Appendices
Number of species on the Appendices
Species lists (Appendices I, II and III)
Environmental treaties
Endangered species
Treaties concluded in 1973
Treaties entered into force in 1975
Wildlife smuggling
1975 in the environment
Treaties of the Democratic Republic of Afghanistan
Treaties of Albania
Treaties of Algeria
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of the People's Republic of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of the military dictatorship in Brazil
Treaties of Brunei
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of the Republic of the Congo
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of Czechoslovakia
Treaties of the Czech Republic
Treaties of Zaire
Treaties of Denmark
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Estonia
Treaties of the People's Democratic Republic of Ethiopia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of West Germany
Treaties of East Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Honduras
Treaties of the Hungarian People's Republic
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Iraq
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Monaco
Treaties of Mongolia
Treaties of Montenegro
Treaties of Morocco
Treaties of the People's Republic of Mozambique
Treaties of Myanmar
Treaties of Namibia
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Poland
Treaties of Portugal
Treaties of Qatar
Treaties of South Korea
Treaties of Moldova
Treaties of Romania
Treaties of Russia
Treaties of Rwanda
Treaties of Samoa
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of the Somali Democratic Republic
Treaties of South Africa
Treaties of Spain
Treaties of Sri Lanka
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of the Democratic Republic of the Sudan
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Thailand
Treaties of North Macedonia
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of Tanzania
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of Yemen
Treaties of Yugoslavia
Treaties of Zambia
Treaties of Zimbabwe
Animal treaties
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties extended to Greenland
Treaties extended to Portuguese Macau
Treaties extended to British Honduras
Treaties extended to Bermuda
Treaties extended to the British Indian Ocean Territory
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to Guernsey
Treaties extended to British Hong Kong
Treaties extended to Jersey
Treaties extended to the Gilbert and Ellice Islands
Treaties extended to the Isle of Man
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties entered into by the European Union | CITES | [
"Biology"
] | 5,692 | [
"Biota by conservation status",
"Endangered species"
] |
6,203 | https://en.wikipedia.org/wiki/Environmental%20Modification%20Convention | The Environmental Modification Convention (ENMOD), formally the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, is an international treaty prohibiting the military or other hostile use of environmental modification techniques having widespread, long-lasting or severe effects. It opened for signature on 18 May 1977 in Geneva and entered into force on 5 October 1978.
The Convention bans weather warfare, which is the use of weather modification techniques for the purpose of inducing damage or destruction. A decision adopted under the Convention on Biological Diversity in 2010 would also ban some forms of weather modification or geoengineering.
Many states do not regard this as a complete ban on the use of herbicides in warfare, such as Agent Orange, but it does require case-by-case consideration.
Parties
The convention was signed by 48 states; 16 of the signatories have not ratified it. As of 2022, the convention has 78 state parties.
History
The problem of artificial modification of the environment for military or other hostile purposes was brought to the international agenda in the early 1970s. Following the US decision of July 1972 to renounce the use of climate modification techniques for hostile purposes, the 1973 resolution by the US Senate calling for an international agreement "prohibiting the use of any environmental or geophysical modification activity as a weapon of war", and an in-depth review by the Department of Defense of the military aspects of weather and other environmental modification techniques, the US decided to seek agreement with the Soviet Union to explore the possibilities of an international agreement.
In July 1974, the US and USSR agreed to hold bilateral discussions on measures to overcome the danger of the use of environmental modification techniques for military purposes, and three subsequent rounds of discussions took place in 1974 and 1975. In August 1975, the US and USSR tabled identical draft texts of a convention at the Conference of the Committee on Disarmament (CCD), where intensive negotiations resulted in a modified text and understandings regarding four articles of this Convention in 1976.
The convention was approved by Resolution 31/72 of the General Assembly of the United Nations on 10 December 1976, by 96 to 8 votes with 30 abstentions.
Environmental Modification Technique
Environmental Modification Technique includes any technique for changing – through the deliberate manipulation of natural processes – the dynamics, composition or structure of the earth, including its biota, lithosphere, hydrosphere and atmosphere, or of outer space.
Structure of ENMOD
The Convention contains ten articles and one Annex on the Consultative Committee of Experts. The Understandings relating to articles I, II, III and VIII are also an integral part of the convention. These Understandings are not incorporated into the convention but are part of the negotiating record and were included in the report transmitted by the Conference of the Committee on Disarmament to the United Nations General Assembly in September 1976 (Report of the Conference of the Committee on Disarmament, Volume I, General Assembly Official Records: Thirty-first Session, Supplement No. 27 (A/31/27), New York, United Nations, 1976, pp. 91–92).
Anthropogenic climate change
ENMOD treaty members are responsible for 83% of carbon dioxide emissions since the treaty entered into force in 1978. The ENMOD treaty could potentially be used by ENMOD member states seeking climate-change loss and damage compensation from other ENMOD member states at the International Court of Justice. With the knowledge that carbon dioxide emissions can enhance extreme weather events, the continued unmitigated greenhouse gas emissions from some ENMOD member states could be viewed as "reckless" in the context of deliberately declining emissions from other ENMOD member states. It is unclear whether the International Court of Justice will consider the ENMOD treaty when it issues a legal opinion on international climate change obligations requested by the United Nations General Assembly on 29 March 2023.
See also
Arms control agreements
Environmental agreements
Climate engineering
Operation Popeye
United Nations Convention on Environmental Modification
References
External links
The text of the agreement compiled by the NGO Committee on Education
Ratifications
A Political Primer on the ENMOD Convention from the Sunshine Project.
Weather modification
Cold War treaties
International humanitarian law treaties
Environmental treaties
Treaties concluded in 1977
Treaties entered into force in 1978
1978 in the environment
Treaties of the Democratic Republic of Afghanistan
Treaties of Algeria
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Bangladesh
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Belgium
Treaties of the People's Republic of Benin
Treaties of the military dictatorship in Brazil
Treaties of Brunei
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Costa Rica
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Dominica
Treaties of Egypt
Treaties of Estonia
Treaties of Finland
Treaties of West Germany
Treaties of East Germany
Treaties of Ghana
Treaties of Greece
Treaties of Guatemala
Treaties of Honduras
Treaties of the Hungarian People's Republic
Treaties of India
Treaties of Ireland
Treaties of Japan
Treaties of Kazakhstan
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Lithuania
Treaties of Malawi
Treaties of Mauritius
Treaties of the Mongolian People's Republic
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Niger
Treaties of North Korea
Treaties of Norway
Treaties of Pakistan
Treaties of Panama
Treaties of Papua New Guinea
Treaties of the Polish People's Republic
Treaties of the Socialist Republic of Romania
Treaties of Saint Kitts and Nevis
Treaties of the Soviet Union
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of São Tomé and Príncipe
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of South Korea
Treaties of Spain
Treaties of Sri Lanka
Treaties of Sweden
Treaties of Switzerland
Treaties of Tajikistan
Treaties of Tunisia
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vietnam
Treaties of the Yemen Arab Republic
Treaties of South Yemen
Treaties extended to Akrotiri and Dhekelia
Treaties extended to the Netherlands Antilles
Treaties extended to the Cook Islands
Treaties extended to Niue
Treaties extended to Greenland
Treaties extended to the Faroe Islands
United Nations treaties
Treaties extended to Anguilla
Treaties extended to Bermuda
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to South Georgia and the South Sandwich Islands
Treaties extended to the Turks and Caicos Islands
Treaties extended to Brunei (protectorate)
Treaties extended to British Antigua and Barbuda
Treaties extended to Saint Christopher-Nevis-Anguilla
Treaties extended to British Saint Vincent and the Grenadines
Treaties extended to British Saint Lucia
Treaties extended to the British Solomon Islands
Treaties extended to British Hong Kong
Treaties extended to Macau
Treaties extended to British Dominica | Environmental Modification Convention | [
"Engineering"
] | 1,376 | [
"Planetary engineering",
"Weather modification"
] |
6,206 | https://en.wikipedia.org/wiki/Computable%20number | In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
Informal definition
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936, i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1.
The key notions in the definition are (1) that some n is specified at the start, and (2) that for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.
This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not.
Formal definition
A real number a is computable if it can be approximated by some computable function in the following manner: given any positive integer n, the function produces an integer f(n) such that (f(n) − 1)/n ≤ a ≤ (f(n) + 1)/n.
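As a concrete illustration, here is a minimal Python sketch (the function name and brute-force search are illustrative assumptions of this example, not part of any standard definition or library) producing such an f(n) for the computable real √2:

```python
from fractions import Fraction

def sqrt2_approx(n: int) -> int:
    """Return an integer f(n) with (f(n)-1)/n <= sqrt(2) <= (f(n)+1)/n."""
    k = 0
    # Find the largest k with (k/n)^2 <= 2, i.e. k^2 <= 2*n^2;
    # then k/n <= sqrt(2) < (k+1)/n, so f(n) = k meets the bounds.
    while (k + 1) ** 2 <= 2 * n * n:
        k += 1
    return k

for n in (1, 10, 100, 1000):
    print(n, sqrt2_approx(n), float(Fraction(sqrt2_approx(n), n)))
```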
A complex number is called computable if its real and imaginary parts are computable.
Equivalent definitions
There are two similar definitions that are equivalent:
There exists a computable function which, given any positive rational error bound ε, produces a rational number r such that |r − a| ≤ ε.
There is a computable sequence of rational numbers qᵢ converging to a such that |qᵢ − qᵢ₊₁| < 2⁻ⁱ for each i.
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function D which, when provided with a rational number as input, returns true or false, satisfying the defining conditions of a Dedekind cut: some rational is mapped to true and some to false, every rational mapped to true is less than every rational mapped to false, and there is no largest rational mapped to true.
An example is given by a program D that defines the cube root of 3, defined by: D(r) = true if r³ < 3, and D(r) = false otherwise.
A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
Properties
Not computably enumerable
Assigning a Gödel number to each Turing machine definition produces a subset S of the natural numbers corresponding to the computable numbers and identifies a surjection from S onto the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set S of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of S that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number x, the well ordering principle provides that there is a minimal element in S which corresponds to x, and therefore there exists a subset of S consisting of these minimal elements, on which the map to the computable numbers is a bijection. The inverse of this bijection is an injection of the computable numbers into the natural numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
Properties as a field
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero.
These operations are actually uniformly computable; for example, there is a Turing machine which on input (A, B, ε) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an ε-approximation of a + b.
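A sketch of this uniformity in Python, under the illustrative convention (an assumption of this example, not a standard interface) that a computable real is given as a function from a rational error bound to a rational approximation:

```python
from fractions import Fraction
from typing import Callable

# Illustrative representation: a real is a function taking a rational
# error bound eps > 0 and returning a rational within eps of the real.
Real = Callable[[Fraction], Fraction]

def add(a: Real, b: Real) -> Real:
    # Querying each operand to within eps/2 makes the sum accurate
    # to within eps, by the triangle inequality.
    return lambda eps: a(eps / 2) + b(eps / 2)

one_third: Real = lambda eps: Fraction(1, 3)   # exact, so eps is ignored
one_sixth: Real = lambda eps: Fraction(1, 6)

print(add(one_third, one_sixth)(Fraction(1, 1000)))  # 1/2
```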
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
Non-computability of the ordering
The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number a. Then there is no Turing machine which on input A outputs "YES" if a > 0 and "NO" if a ≤ 0. To see why, suppose the machine described by A keeps outputting 0 as approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers a and b, where a ≠ b, and outputs whether a < b or a > b. It is sufficient to use ε-approximations where ε < |b − a|/2; so by taking increasingly small ε (approaching 0), one eventually can decide whether a < b or a > b.
Other properties
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including:
any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme.
Chaitin's constant, Ω, which is a type of real number that is Turing equivalent to the halting problem.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine.
A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
Digit strings and the Cantor and Baire spaces
Turing's original paper defined computable numbers as follows:
(The decimal expansion of a only refers to the digits following the decimal point.)
Turing was aware that this definition is equivalent to the ε-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the ε sense: if n is large enough that 10⁻ⁿ < ε, then the first n digits of the decimal expansion for a provide an ε-approximation of a. For the converse, we pick a computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a, but it may improperly end in an infinite sequence of 9's, in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of 2^ω (total 0,1-valued functions) instead of real numbers in [0,1]. The members of 2^ω can be identified with binary decimal expansions, but since the expansions .d₁d₂…dₙ0111… and .d₁d₂…dₙ1000… denote the same real number, the interval [0,1] can only be bijectively (and homeomorphically under the subset topology) identified with the subset of 2^ω not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses approximations rather than decimal expansions.
However, from a computability-theoretic or measure-theoretic perspective, the two structures 2^ω and [0,1] are essentially identical. Thus, computability theorists often refer to members of 2^ω as reals. While 2^ω is totally disconnected, for questions about Π⁰₁ classes or randomness it is easier to work in 2^ω.
Elements of the Baire space ω^ω are sometimes called reals as well and, though it contains a homeomorphic image of the real line, ω^ω isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance, an x satisfying a quantifier-free formula must be computable, while the unique x satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
Use in place of the reals
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
Implementations of exact arithmetic
Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq) and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the iRRAM package.
See also
Constructible number
Definable number
Semicomputable function
Transcomputational problem
Notes
References
Computable numbers (and Turing's a-machines) were introduced in this paper; the definition of computable numbers uses infinite decimal sequences.
Further reading
This paper describes the development of the calculus over the computable number field.
§1.3.2 introduces the definition by nested sequences of intervals converging to the singleton real. Other representations are discussed in §4.1.
Computability theory
Theory of computation | Computable number | [
"Mathematics"
] | 2,724 | [
"Computability theory",
"Mathematical logic"
] |
6,207 | https://en.wikipedia.org/wiki/Electric%20current | An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. The moving particles are called charge carriers, which may be one of several types of particles, depending on the conductor. In electric circuits the charge carriers are often electrons moving through a wire. In semiconductors they can be electrons or holes. In an electrolyte the charge carriers are ions, while in plasma, an ionized gas, they are ions and electrons.
In the International System of Units (SI), electric current is expressed in units of ampere (sometimes called an "amp", symbol A), which is equivalent to one coulomb per second. The ampere is an SI base unit and electric current is a base quantity in the International System of Quantities (ISQ). Electric current is also known as amperage and is measured using a device called an ammeter.
Electric currents create magnetic fields, which are used in motors, generators, inductors, and transformers. In ordinary conductors, they cause Joule heating, which creates light in incandescent light bulbs. Time-varying currents emit electromagnetic waves, which are used in telecommunications to broadcast information.
Symbol
The conventional symbol for current is I, which originates from the French phrase intensité du courant (current intensity). Current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating Ampère's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using C to I until 1896.
Conventions
The conventional direction of current, also known as conventional current, is arbitrarily defined as the direction in which charges flow. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei of the atoms are held in a fixed position, and the negatively charged electrons are the charge carriers, free to move about in the metal. In other materials, notably the semiconductors, the charge carriers can be positive or negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. Negatively charged carriers, such as the electrons (the charge carriers in metal wires and many other electronic circuit components), therefore flow in the opposite direction of conventional current flow in an electrical circuit.
Reference direction
A current in a wire or circuit element can flow in either of two directions. When defining a variable to represent the current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the reference direction of the current . When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown until the analysis is completed. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the current implies the actual direction of current through that circuit element is opposite that of the chosen reference direction.
Ohm's law
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
I = V/R,
where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.
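For instance (an arbitrary illustrative choice of values), a potential difference of 12 V across a 4 Ω resistor drives a current of
$$I = \frac{V}{R} = \frac{12\ \text{V}}{4\ \Omega} = 3\ \text{A}.$$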
Alternating and direct current
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave, though certain applications use alternative waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or modulated) onto the AC signal.
In contrast, direct current (DC) refers to a system in which electric charge moves in only one direction (sometimes called unidirectional flow). Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Alternating current can also be converted to direct current through use of a rectifier. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was galvanic current.
Occurrences
Natural observable examples of electric current include lightning, static electric discharge, and the solar wind, the source of the polar auroras.
Man-made occurrences of electric current include the flow of conduction electrons in metal wires, such as the overhead power lines that deliver electrical energy across long distances and the smaller wires within electrical and electronic equipment. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly near the surface, in conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery, and the flow of holes within metals and semiconductors.
A biological example of current is the flow of ions in neurons and nerves, responsible for both thought and sensory perception.
Measurement
Current can be measured using an ammeter.
Electric current can be directly measured with a galvanometer, but this method involves breaking the electrical circuit, which is sometimes inconvenient.
Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current.
Devices, at the circuit level, use various techniques to measure current:
Shunt resistors
Hall effect current sensor transducers
Transformers (however, DC cannot be measured this way)
Magnetoresistive field sensors
Rogowski coils
Current clamps
Resistive heating
Joule heating, also known as ohmic heating and resistive heating, is the process of power dissipation by which the passage of an electric current through a conductor increases the internal energy of the conductor, converting thermodynamic work into heat. The phenomenon was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire for a 30-minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
This relationship is known as Joule's Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known SI unit of power, the watt (symbol: W), is equivalent to one joule per second.
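In symbols (a standard restatement of the law, not a quotation from Joule):
$$P = I^{2}R, \qquad E = Pt = I^{2}Rt,$$
where P is the power dissipated in the conductor, E is the heat produced over a time t, I is the current, and R is the resistance.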
Electromagnetism
Electromagnet
In an electromagnet a coil of wires behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately.
Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Electromagnetic induction
Magnetic fields can also be used to make electric currents. When a changing magnetic field is applied to a conductor, an electromotive force (EMF) is induced, which starts an electric current when there is a suitable path.
Radio waves
When an electric current flows in a suitably shaped conductor at radio frequencies, radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
Conduction mechanisms in various media
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, conventional current is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction to the overall electron movement. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydronium ions flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
Metals
In a metal, some of the outer electrons in each atom are not bound to the individual molecules as they are in molecular solids, or in full bands as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons can serve as charge carriers, carrying a current. Metals are particularly conductive because there are many of these free electrons. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10⁶ metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, One, Two, Three...Infinity (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation:
I = Q/t,
where Q is the electric charge transferred through the surface over a time t. If Q and t are measured in coulombs and seconds respectively, I is in amperes.
More generally, electric current can be represented as the rate at which charge flows through a given surface:
I = dQ/dt.
Electrolytes
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, neutralizing each ion.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions ("protons") that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O2 → 2O], which then recombine creating ozone [O3]).
Vacuum
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called cathode spots or anode spots) are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
Semiconductor
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10⁻² to 10⁴ siemens per centimeter (S⋅cm⁻¹).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to many discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the conduction band, the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension, that is, in a nanowire, for every energy there is a state with electrons flowing in one direction and another state with the electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as free electrons, though they are often simply called electrons if that is clear in context.
Current density and Ohm's law
Current density is the rate at which charge passes through a chosen unit area. It is defined as a vector whose magnitude is the current per unit cross-sectional area. As discussed in Reference direction, the direction is arbitrary. Conventionally, if the moving charges are positive, then the current density has the same sign as the velocity of the charges. For negative charges, the sign of the current density is opposite to the velocity of the charges. In SI units, current density (symbol: j) is expressed in the SI base units of amperes per square metre.
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
I = V/R,
where I is the current, measured in amperes; V is the potential difference, measured in volts; and R is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
Drift speed
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation:
I = nAvQ,
where
I is the electric current,
n is the number of charged particles per unit volume (or charge carrier density),
A is the cross-sectional area of the conductor,
v is the drift velocity, and
Q is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode-ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
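A back-of-the-envelope check of the copper-wire figure, using the relation I = nAvQ from above; the carrier density n is an assumed textbook value for copper, not a number given in this article:

```python
# Worked example: drift velocity v = I / (n * A * Q) for the copper wire above.
I = 5.0        # current, amperes
A = 0.5e-6     # cross-sectional area, m^2 (0.5 mm^2)
n = 8.5e28     # free electrons per m^3 in copper (assumed textbook value)
Q = 1.602e-19  # elementary charge, coulombs

v = I / (n * A * Q)
print(f"drift velocity ~ {v * 1e3:.2f} mm/s")  # prints ~0.73 mm/s
```

The result is a fraction of a millimetre per second, consistent with the order-of-magnitude claim in the text.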
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases. (See also hydraulic analogy.)
The low drift velocity of charge carriers is analogous to air motion; in other words, winds.
The high speed of electromagnetic waves is roughly analogous to the speed of sound in a gas (sound waves move through air much faster than large-scale motions such as convection)
The random motion of charges is analogous to heat: the thermal velocity of randomly vibrating gas particles.
See also
Current density
Displacement current
Electric shock
Electrical measurements
History of electrical engineering
Polarity symbols
International System of Quantities
SI electromagnetism units
Single-phase electric power
Static electricity
Three-phase electric power
Two-phase electric power
Notes
References
SI base quantities
Electromagnetic quantities | Electric current | [
"Physics",
"Mathematics"
] | 4,650 | [
"Electromagnetic quantities",
"Physical quantities",
"SI base quantities",
"Quantity",
"Electric current",
"Wikipedia categories named after physical quantities"
] |
6,211 | https://en.wikipedia.org/wiki/Context-sensitive%20grammar | A context-sensitive grammar (CSG) is a formal grammar in which the left-hand sides and right-hand sides of any production rules may be surrounded by a context of terminal and nonterminal symbols. Context-sensitive grammars are more general than context-free grammars, in the sense that there are languages that can be described by a CSG but not by a context-free grammar. Context-sensitive grammars are less general (in the same sense) than unrestricted grammars. Thus, CSGs are positioned between context-free and unrestricted grammars in the Chomsky hierarchy.
A formal language that can be described by a context-sensitive grammar, or, equivalently, by a noncontracting grammar or a linear bounded automaton, is called a context-sensitive language. Some textbooks actually define CSGs as non-contracting, although this is not how Noam Chomsky defined them in 1959. This choice of definition makes no difference in terms of the languages generated (i.e. the two definitions are weakly equivalent), but it does make a difference in terms of what grammars are structurally considered context-sensitive; the latter issue was analyzed by Chomsky in 1963.
Chomsky introduced context-sensitive grammars as a way to describe the syntax of natural language where it is often the case that a word may or may not be appropriate in a certain place depending on the context. Walter Savitch has criticized the terminology "context-sensitive" as misleading and proposed "non-erasing" as better explaining the distinction between a CSG and an unrestricted grammar.
Although it is well known that certain features of languages (e.g. cross-serial dependency) are not context-free, it is an open question how much of CSGs' expressive power is needed to capture the context sensitivity found in natural languages. Subsequent research in this area has focused on the more computationally tractable mildly context-sensitive languages. The syntaxes of some visual programming languages can be described by context-sensitive graph grammars.
Formal definition
Formal grammar
Let us notate a formal grammar as G = (N, Σ, P, S), with N a set of nonterminal symbols, Σ a set of terminal symbols, P a set of production rules, and S the start symbol.
A string u directly yields, or directly derives to, a string v, denoted as u ⇒ v, if v can be obtained from u by an application of some production rule in P, that is, if u = γLθ and v = γRθ, where L → R is a production rule, and γ and θ are the unaffected left and right parts of the string, respectively.
More generally, u is said to yield, or derive to, v, denoted as u ⇒* v, if v can be obtained from u by repeated application of production rules, that is, if u = u₀ ⇒ u₁ ⇒ … ⇒ uₙ = v for some n ≥ 0 and some strings u₁, …, uₙ₋₁. In other words, the relation ⇒* is the reflexive transitive closure of the relation ⇒.
The language of the grammar G is the set of all terminal-symbol strings derivable from its start symbol, formally: L(G) = { w ∈ Σ* : S ⇒* w }.
Derivations that do not end in a string composed of terminal symbols only are possible, but do not contribute to L(G).
Context-sensitive grammar
A formal grammar is context-sensitive if each rule in P is either of the form S → ε, where ε is the empty string, or of the form
αAβ → αγβ
with A ∈ N, α, β ∈ (N ∪ Σ)*, and γ ∈ (N ∪ Σ)⁺.
The name context-sensitive is explained by the α and β that form the context of A and determine whether A can be replaced with γ or not.
By contrast, in a context-free grammar, no context is present: the left hand side of every production rule is just a nonterminal.
The string γ is not allowed to be empty. Without this restriction, the resulting grammars become equal in power to unrestricted grammars.
(Weakly) equivalent definitions
A noncontracting grammar is a grammar in which for any production rule, of the form u → v, the length of u is less than or equal to the length of v.
Every context-sensitive grammar is noncontracting, while every noncontracting grammar can be converted into an equivalent context-sensitive grammar; the two classes are weakly equivalent.
Some authors use the term context-sensitive grammar to refer to noncontracting grammars in general.
The left-context- and right-context-sensitive grammars are defined by restricting the rules to just the form αA → αγ and to just Aβ → γβ, respectively. The languages generated by these grammars are also the full class of context-sensitive languages. The equivalence was established via the Penttonen normal form.
Examples
aⁿbⁿcⁿ
The following context-sensitive grammar, with start symbol S, generates the canonical non-context-free language { aⁿbⁿcⁿ | n ≥ 1 }:
1. S → aBC
2. S → aSBC
3. CB → CZ
4. CZ → WZ
5. WZ → WC
6. WC → BC
7. aB → ab
8. bB → bb
9. bC → bc
10. cC → cc
Rules 1 and 2 allow for blowing up S to aⁿBC(BC)ⁿ⁻¹; rules 3 to 6 allow for successively exchanging each CB to BC (four rules are needed for that since a rule CB → BC wouldn't fit into the scheme αAβ → αγβ); rules 7–10 allow replacing a non-terminal B or C with its corresponding terminal b or c, respectively, provided it is in the right place.
A generation chain for aaabbbccc is:
S
→2 aSBC
→2 aaSBCBC
→1 aaaBCBCBC
→3 aaaBCZCBC
→4 aaaBWZCBC
→5 aaaBWCCBC
→6 aaaBBCCBC
→3 aaaBBCCZC
→4 aaaBBCWZC
→5 aaaBBCWCC
→6 aaaBBCBCC
→3 aaaBBCZCC
→4 aaaBBWZCC
→5 aaaBBWCCC
→6 aaaBBBCCC
→7 aaabBBCCC
→8 aaabbBCCC
→8 aaabbbCCC
→9 aaabbbcCC
→10 aaabbbccC
→10 aaabbbccc
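The chain can be checked mechanically. The following Python sketch (written for the rule numbering above, not taken from any reference) rewrites the leftmost occurrence of each applied rule's left-hand side:

```python
rules = {
    1: ("S", "aBC"),  2: ("S", "aSBC"),
    3: ("CB", "CZ"),  4: ("CZ", "WZ"),  5: ("WZ", "WC"),  6: ("WC", "BC"),
    7: ("aB", "ab"),  8: ("bB", "bb"),  9: ("bC", "bc"),  10: ("cC", "cc"),
}

def apply(rule: int, s: str) -> str:
    lhs, rhs = rules[rule]
    assert lhs in s, f"rule {rule} not applicable to {s!r}"
    return s.replace(lhs, rhs, 1)  # rewrite the leftmost occurrence

s = "S"
for r in (2, 2, 1, 3, 4, 5, 6, 3, 4, 5, 6, 3, 4, 5, 6, 7, 8, 8, 9, 10, 10):
    s = apply(r, s)
print(s)  # aaabbbccc
```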
aⁿbⁿcⁿdⁿ, etc.
More complicated grammars can be used to parse { aⁿbⁿcⁿdⁿ | n ≥ 1 }, and other languages with even more letters. Here we show a simpler approach using non-contracting grammars:
Start with a kernel of regular productions generating the sentential forms
and then include the non contracting productions
,
,
,
,
,
,
,
,
,
.
aᵐbⁿcᵐdⁿ
A non-contracting grammar (for which there is an equivalent CSG) for the language { aᵐbⁿcᵐdⁿ | m ≥ 1, n ≥ 1 } is defined by
,
,
,
,
,
,
, and
.
With these definitions, a derivation for is:
.
a^(2^i)
A noncontracting grammar for the language { a^(2^i) | i ≥ 1 } is constructed in Example 9.5 (p. 224) of (Hopcroft, Ullman, 1979).
Kuroda normal form
Every context-sensitive grammar which does not generate the empty string can be transformed into a weakly equivalent one in Kuroda normal form. "Weakly equivalent" here means that the two grammars generate the same language. The normal form will not in general be context-sensitive, but will be a noncontracting grammar.
The Kuroda normal form is an actual normal form for non-contracting grammars.
Properties and uses
Equivalence to linear bounded automaton
A formal language can be described by a context-sensitive grammar if and only if it is accepted by some linear bounded automaton (LBA). In some textbooks this result is attributed solely to Landweber and Kuroda. Others call it the Myhill–Landweber–Kuroda theorem. (Myhill introduced the concept of deterministic LBA in 1960. Peter S. Landweber published in 1963 that the language accepted by a deterministic LBA is context sensitive. Kuroda introduced the notion of non-deterministic LBA and the equivalence between LBA and CSGs in 1964.)
It is still an open question whether every context-sensitive language can be accepted by a deterministic LBA.
Closure properties
Context-sensitive languages are closed under complement. This 1988 result is known as the Immerman–Szelepcsényi theorem.
Moreover, they are closed under union, intersection, concatenation, substitution, inverse homomorphism, and Kleene plus.
Every recursively enumerable language L can be written as h(L′) for some context-sensitive language L′ and some string homomorphism h.
Computational problems
The decision problem that asks whether a certain string s belongs to the language of a given context-sensitive grammar G, is PSPACE-complete. Moreover, there are context-sensitive grammars whose languages are PSPACE-complete. In other words, there is a context-sensitive grammar G such that deciding whether a certain string s belongs to the language of G is PSPACE-complete (so G is fixed and only s is part of the input of the problem).
The emptiness problem for context-sensitive grammars (given a context-sensitive grammar G, is L(G)=∅ ?) is undecidable.
As model of natural languages
Savitch has proven the following theoretical result, on which he bases his criticism of CSGs as basis for natural language: for any recursively enumerable set R, there exists a context-sensitive language/grammar G which can be used as a sort of proxy to test membership in R in the following way: given a string s, s is in R if and only if there exists a positive integer n for which scⁿ is in G, where c is an arbitrary symbol not part of R.
It has been shown that nearly all natural languages may in general be characterized by context-sensitive grammars, but the whole class of CSGs seems to be much bigger than natural languages. Worse yet, since the aforementioned decision problem for CSGs is PSPACE-complete, that makes them totally unworkable for practical use, as a polynomial-time algorithm for a PSPACE-complete problem would imply P=NP.
It was proven that some natural languages are not context-free, based on identifying so-called cross-serial dependencies and unbounded scrambling phenomena. However, this does not necessarily imply that the class of CSGs is necessary to capture "context sensitivity" in the colloquial sense of these terms in natural languages. For example, linear context-free rewriting systems (LCFRSs) are strictly weaker than CSGs but can account for the phenomenon of cross-serial dependencies; one can write an LCFRS grammar for { aⁿbⁿcⁿdⁿ | n ≥ 1 }, for example.
Ongoing research on computational linguistics has focused on formulating other classes of languages that are "mildly context-sensitive" whose decision problems are feasible, such as tree-adjoining grammars, combinatory categorial grammars, coupled context-free languages, and linear context-free rewriting systems. The languages generated by these formalisms properly lie between the context-free and context-sensitive languages.
More recently, the class PTIME has been identified with range concatenation grammars, which are now considered to be the most expressive of the mild-context sensitive language classes.
See also
Chomsky hierarchy
Growing context-sensitive grammar
Definite clause grammar#Non-context-free grammars
List of parser generators for context-sensitive grammars
Notes
References
Further reading
External links
Earley Parsing for Context-Sensitive Grammars
Formal languages
Grammar frameworks | Context-sensitive grammar | [
"Mathematics"
] | 2,234 | [
"Formal languages",
"Mathematical logic"
] |
6,212 | https://en.wikipedia.org/wiki/Context-sensitive%20language | In formal language theory, a context-sensitive language is a language that can be defined by a context-sensitive grammar (and equivalently by a noncontracting grammar). Context-sensitive is known as type-1 in the Chomsky hierarchy of formal languages.
Computational properties
Computationally, a context-sensitive language is equivalent to a linear bounded nondeterministic Turing machine, also called a linear bounded automaton. That is a non-deterministic Turing machine with a tape of only kn cells, where n is the size of the input and k is a constant associated with the machine. This means that every formal language that can be decided by such a machine is a context-sensitive language, and every context-sensitive language can be decided by such a machine.
This set of languages is also known as NLINSPACE or NSPACE(O(n)), because they can be accepted using linear space on a non-deterministic Turing machine. The class LINSPACE (or DSPACE(O(n))) is defined the same, except using a deterministic Turing machine. Clearly LINSPACE is a subset of NLINSPACE, but it is not known whether LINSPACE = NLINSPACE.
Examples
One of the simplest context-sensitive but not context-free languages is L = { aⁿbⁿcⁿ : n ≥ 1 }: the language of all strings consisting of n occurrences of the symbol "a", then n "b"s, then n "c"s (abc, aabbcc, aaabbbccc, etc.). A superset of this language, called the Bach language, is defined as the set of all strings where "a", "b" and "c" (or any other set of three symbols) occurs equally often in any order (aabccb, baabcaccb, etc.) and is also context-sensitive.
L can be shown to be a context-sensitive language by constructing a linear bounded automaton which accepts it. The language can easily be shown to be neither regular nor context-free by applying the respective pumping lemmas for each of the language classes to L.
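For intuition, deciding membership in L needs only a few counters, comfortably within the linear space an LBA allows; the following Python sketch is illustrative only and does not simulate an LBA:

```python
def in_L(s: str) -> bool:
    # Accept exactly the strings a^n b^n c^n with n >= 1.
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

assert in_L("abc") and in_L("aaabbbccc")
assert not in_L("") and not in_L("aabbc") and not in_L("abcabc")
```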
Similarly:
is another context-sensitive language; the corresponding context-sensitive grammar can be easily projected starting with two context-free grammars generating sentential forms in the formats
and
and then supplementing them with a permutation production like
, a new starting symbol and standard syntactic sugar.
is another context-sensitive language (the "3" in the name of this language is intended to mean a ternary alphabet); that is, the "product" operation defines a context-sensitive language (but the "sum" defines only a context-free language as the grammar and shows). Because of the commutative property of the product, the most intuitive grammar for is ambiguous. This problem can be avoided considering a somehow more restrictive definition of the language, e.g. . This can be specialized to
and, from this, to , , etc.
is a context-sensitive language. The corresponding context-sensitive grammar can be obtained as a generalization of the context-sensitive grammars for , , etc.
is a context-sensitive language.
is a context-sensitive language (the "2" in the name of this language is intended to mean a binary alphabet). This was proved by Hartmanis using pumping lemmas for regular and context-free languages over a binary alphabet and, after that, sketching a linear bounded multitape automaton accepting .
is a context-sensitive language (the "1" in the name of this language is intended to mean a unary alphabet). This was credited by A. Salomaa to Matti Soittola by means of a linear bounded automaton over a unary alphabet (pages 213-214, exercise 6.8) and also to Marti Penttonen by means of a context-sensitive grammar also over a unary alphabet (See: Formal Languages by A. Salomaa, page 14, Example 2.5).
An example of recursive language that is not context-sensitive is any recursive language whose decision is an EXPSPACE-hard problem, say, the set of pairs of equivalent regular expressions with exponentiation.
Properties of context-sensitive languages
The union, intersection, and concatenation of two context-sensitive languages are context-sensitive; the Kleene plus of a context-sensitive language is also context-sensitive.
The complement of a context-sensitive language is itself context-sensitive, a result known as the Immerman–Szelepcsényi theorem.
Membership of a string in a language defined by an arbitrary context-sensitive grammar, or by an arbitrary deterministic context-sensitive grammar, is a PSPACE-complete problem.
See also
Linear bounded automaton
List of parser generators for context-sensitive languages
Chomsky hierarchy
Indexed languages – a strict subset of the context-sensitive languages
Weir hierarchy
References
Sipser, M. (1996), Introduction to the Theory of Computation, PWS Publishing Co.
Formal languages | Context-sensitive language | [
"Mathematics"
] | 1,010 | [
"Formal languages",
"Mathematical logic"
] |
6,220 | https://en.wikipedia.org/wiki/Circle | A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The length of a line segment connecting two points on the circle and passing through the centre is called the diameter. A circle bounds a region of the plane called a disc.
The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.
Terminology
Annulus: a ring-shaped object, the region bounded by two concentric circles.
Arc: any connected part of a circle. Specifying two end points of an arc and a centre allows for two arcs that together make up a full circle.
Centre: the point equidistant from all points on the circle.
Chord: a line segment whose endpoints lie on the circle, thus dividing a circle into two segments.
Circumference: the length of one circuit along the circle, or the distance around the circle.
Diameter: a line segment whose endpoints lie on the circle and that passes through the centre; or the length of such a line segment. This is the largest distance between any two points on the circle. It is a special case of a chord, namely the longest chord for a given circle, and its length is twice the length of a radius.
Disc: the region of the plane bounded by a circle. In strict mathematical usage, a circle is only the boundary of the disc (or disk), while in everyday use the term "circle" may also refer to a disc.
Lens: the region common to (the intersection of) two overlapping discs.
Radius: a line segment joining the centre of a circle with any single point on the circle itself; or the length of such a segment, which is half (the length of) a diameter. Usually, the radius is denoted r and required to be a positive number. A circle with r = 0 is a degenerate case consisting of a single point.
Sector: a region bounded by two radii of equal length with a common centre and either of the two possible arcs, determined by this centre and the endpoints of the radii.
Segment: a region bounded by a chord and one of the arcs connecting the chord's endpoints. The length of the chord imposes a lower boundary on the diameter of possible arcs. Sometimes the term segment is used only for regions not containing the centre of the circle to which their arc belongs.
Secant: an extended chord, a coplanar straight line, intersecting a circle in two points.
Semicircle: one of the two possible arcs determined by the endpoints of a diameter, taking its midpoint as centre. In non-technical common usage it may mean the interior of the two-dimensional region bounded by a diameter and one of its arcs, that is technically called a half-disc. A half-disc is a special case of a segment, namely the largest one.
Tangent: a coplanar straight line that has one single point in common with a circle ("touches the circle at this point").
All of the specified regions may be considered as open, that is, not containing their boundaries, or as closed, including their respective boundaries.
Etymology
The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring". The origins of the words circus and circuit are closely related.
History
Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings. Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi.
The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to 256/81 (3.16049...) as an approximate value of π.
Book 3 of Euclid's Elements deals with the properties of circles. Euclid defines a circle as a plane figure bounded by one line, called the circumference, such that all straight lines drawn from a certain point within it, called the centre, to the circumference are equal.
In Plato's Seventh Letter there is a detailed definition and explanation of the circle. Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry, astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.
In 1882 CE, Ferdinand von Lindemann proved that π is transcendental, showing that the millennia-old problem of squaring the circle cannot be solved with straightedge and compass.
With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions.
Symbolism and religious use
From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas.
However, differences in worldview (beliefs and culture) had a great impact on artists' perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits.
The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth. Magic circles are part of some traditions of Western esotericism.
Analytic results
Circumference
The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. The ratio of a circle's circumference to its radius is 2π. Thus the circumference C is related to the radius r and diameter d by C = 2πr = πd.
Area enclosed
As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, which comes to π multiplied by the radius squared: Area = πr².
Equivalently, denoting diameter by d, Area = πd²/4 ≈ 0.7854d²;
that is, approximately 79% of the circumscribing square (whose side is of length d).
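These relations are easy to check numerically; the following Python sketch (with illustrative function names) does so:

```python
import math

def circumference(r):
    return 2 * math.pi * r          # C = 2*pi*r = pi*d

def area(r):
    return math.pi * r ** 2         # A = pi*r^2 = pi*d^2/4

r = 3.0
print(circumference(r))             # 18.849...
print(area(r))                      # 28.274...
# Fraction of the circumscribing square (side d = 2r) filled by the disc:
print(area(r) / (2 * r) ** 2)       # 0.7853... ~ 79%
```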
The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.
Radian
If a circle of radius r is centred at the vertex of an angle, and that angle intercepts an arc of the circle with an arc length of s, then the radian measure θ of the angle is the ratio of the arc length to the radius: θ = s/r.
The circular arc is said to subtend the angle, known as the central angle, at the centre of the circle. The angle subtended by a complete circle at its centre is a complete angle, which measures 2π radians, 360 degrees, or one turn.
Using radians, the formula for the arc length s of a circular arc of radius r and subtending a central angle of measure θ is s = rθ,
and the formula for the area A of a circular sector of radius r and with central angle of measure θ is A = r²θ/2.
In the special case θ = 2π, these formulae yield the circumference of a complete circle and the area of a complete disc, respectively.
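A short numeric illustration of these formulae (function names are illustrative):

```python
import math

def arc_length(r, theta):
    return r * theta                # s = r * theta (theta in radians)

def sector_area(r, theta):
    return 0.5 * r * r * theta      # A = r^2 * theta / 2

r = 2.0
# theta = 2*pi recovers the full circumference and the full disc area:
print(arc_length(r, 2 * math.pi) - 2 * math.pi * r)    # 0.0
print(sector_area(r, 2 * math.pi) - math.pi * r * r)   # 0.0
```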
Equations
Cartesian coordinates
Equation of a circle
In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that (x − a)² + (y − b)² = r².
This equation, known as the equation of the circle, follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to x² + y² = r².
One coordinate as a function of the other
The circle of radius r with centre at (a, b) in the x–y plane can be broken into two semicircles, each of which is the graph of a function, y = b + √(r² − (x − a)²) and y = b − √(r² − (x − a)²), respectively,
for values of x ranging from a − r to a + r.
Parametric form
The equation can be written in parametric form using the trigonometric functions sine and cosine as x = a + r cos t, y = b + r sin t,
where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x axis.
An alternative parametrisation of the circle is x = a + r(1 − t²)/(1 + t²), y = b + 2rt/(1 + t²).
In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted.
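Both parametrisations can be checked numerically; in the following Python sketch (illustrative names, assuming the formulas above), every generated point satisfies the circle equation:

```python
import math

def point_trig(a, b, r, t):
    # Standard parametrisation: angle t in [0, 2*pi)
    return a + r * math.cos(t), b + r * math.sin(t)

def point_rational(a, b, r, t):
    # Tangent half-angle parametrisation: t ranges over all reals;
    # the point (a - r, b) is only reached in the limit t -> infinity.
    return a + r * (1 - t * t) / (1 + t * t), b + r * 2 * t / (1 + t * t)

a, b, r = 1.0, 2.0, 5.0
for t in (0.0, 0.5, 2.0):
    for x, y in (point_trig(a, b, r, t), point_rational(a, b, r, t)):
        # Every generated point satisfies (x-a)^2 + (y-b)^2 = r^2:
        print(round((x - a) ** 2 + (y - b) ** 2, 9))  # 25.0
```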
3-point form
The equation of the circle determined by three points not on a line can be obtained by a conversion of the 3-point form of a circle equation.
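Equivalently, the centre and radius can be computed directly from the three points; a Python sketch of this computation (illustrative names, derived from the perpendicular-bisector equations) is:

```python
def circle_through(p1, p2, p3):
    # Centre (a, b) and radius r of the circle through three non-collinear
    # points, solved from the perpendicular-bisector (circumcentre) equations.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    a = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    b = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = ((x1 - a) ** 2 + (y1 - b) ** 2) ** 0.5
    return (a, b), r

print(circle_through((0, 0), (2, 0), (0, 2)))  # ((1.0, 1.0), 1.4142...)
```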
Homogeneous form
In homogeneous coordinates, each conic section with the equation of a circle has the form x² + y² − 2axz − 2byz + cz² = 0.
It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1: i: 0) and J(1: −i: 0). These points are called the circular points at infinity.
Polar coordinates
In polar coordinates, the equation of a circle is r² − 2r r0 cos(θ − φ) + r0² = a²,
where a is the radius of the circle, (r, θ) are the polar coordinates of a generic point on the circle, and (r0, φ) are the polar coordinates of the centre of the circle (i.e., r0 is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. r0 = 0, this reduces to r = a. When r0 = a, or when the origin lies on the circle, the equation becomes r = 2a cos(θ − φ).
In the general case, the equation can be solved for r, giving r = r0 cos(θ − φ) ± √(a² − r0² sin²(θ − φ)).
Without the ± sign, the equation would in some cases describe only half a circle.
Complex plane
In the complex plane, a circle with a centre at c and radius r has the equation |z − c| = r.
In parametric form, this can be written as z = c + re^(it), with t ranging from 0 to 2π.
The slightly generalised equation pzz̄ + gz + ḡz̄ = q
for real p, q and complex g is sometimes called a generalised circle. This becomes the above equation for a circle with p = 1, g = −c̄ and q = r² − |c|², since |z − c|² = zz̄ − c̄z − cz̄ + |c|². Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.
Tangent lines
The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If P = (x1, y1) and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x1, y1), so it has the form (x1 − a)x + (y1 − b)y = c. Evaluating at (x1, y1) determines the value of c, and the result is that the equation of the tangent is (x1 − a)x + (y1 − b)y = (x1 − a)x1 + (y1 − b)y1,
or (x1 − a)(x − a) + (y1 − b)(y − b) = r².
If y1 ≠ b, then the slope of this line is −(x1 − a)/(y1 − b).
This can also be found using implicit differentiation.
When the centre of the circle is at the origin, then the equation of the tangent line becomes x1x + y1y = r²,
and its slope is −x1/y1.
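A small Python sketch (illustrative names) computing the tangent at a point on a circle:

```python
def tangent_at(a, b, r, x1, y1, tol=1e-9):
    # Tangent to the circle (x-a)^2 + (y-b)^2 = r^2 at a point (x1, y1)
    # on the circle, returned as coefficients (A, B, C) of A*x + B*y = C.
    if abs((x1 - a) ** 2 + (y1 - b) ** 2 - r * r) > tol:
        raise ValueError("point is not on the circle")
    A, B = x1 - a, y1 - b
    return A, B, A * x1 + B * y1

# Unit circle at the origin, point (3/5, 4/5): tangent is 0.6x + 0.8y = 1,
# with slope -x1/y1 = -3/4.
print(tangent_at(0, 0, 1, 0.6, 0.8))  # (0.6, 0.8, 1.0)
```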
Properties
The circle is the shape with the largest area for a given length of perimeter (see Isoperimetric inequality).
The circle is a highly symmetric shape: every line through the centre forms a line of reflection symmetry, and it has rotational symmetry around the centre for every angle. Its symmetry group is the orthogonal group O(2,R). The group of rotations alone is the circle group T.
All circles are similar.
A circle's circumference and radius are proportional.
The area enclosed and the square of its radius are proportional.
The constants of proportionality are 2π and π respectively.
The circle that is centred at the origin with radius 1 is called the unit circle.
Thought of as a great circle of the unit sphere, it becomes the Riemannian circle.
Through any three points, not all on the same line, there lies a unique circle. In Cartesian coordinates, it is possible to give explicit formulae for the coordinates of the centre of the circle and the radius in terms of the coordinates of the three given points. See circumcircle.
Chord
Chords are equidistant from the centre of a circle if and only if they are equal in length.
The perpendicular bisector of a chord passes through the centre of a circle; equivalent statements stemming from the uniqueness of the perpendicular bisector are:
A perpendicular line from the centre of a circle bisects the chord.
The line segment through the centre bisecting a chord is perpendicular to the chord.
If a central angle and an inscribed angle of a circle are subtended by the same chord and on the same side of the chord, then the central angle is twice the inscribed angle.
If two angles are inscribed on the same chord and on the same side of the chord, then they are equal.
If two angles are inscribed on the same chord and on opposite sides of the chord, then they are supplementary.
For a cyclic quadrilateral, the exterior angle is equal to the interior opposite angle.
An inscribed angle subtended by a diameter is a right angle (see Thales' theorem).
The diameter is the longest chord of the circle.
Among all the circles with a chord AB in common, the circle with minimal radius is the one with diameter AB.
If the intersection of any two chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then ab = cd (see the numeric check after this list).
If the intersection of any two perpendicular chords divides one chord into lengths a and b and divides the other chord into lengths c and d, then a² + b² + c² + d² equals the square of the diameter.
The sum of the squared lengths of any two chords intersecting at right angles at a given point is the same as that of any other two perpendicular chords intersecting at the same point and is given by 8r2 − 4p2, where r is the circle radius, and p is the distance from the centre point to the point of intersection.
The distance from a point on the circle to a given chord times the diameter of the circle equals the product of the distances from the point to the ends of the chord.
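The intersecting-chords relation ab = cd mentioned above can be verified numerically; in the following Python sketch (illustrative names), the product is the power of the interior point and is independent of the chord's direction:

```python
import math

# Numeric check of the intersecting-chords relation ab = cd on the unit
# circle: two chords through the same interior point give equal products.
def chord_segments(px, py, theta):
    # Segment lengths cut on the chord through (px, py) with direction
    # theta, for the unit circle centred at the origin; the intersection
    # parameters t solve |(px, py) + t*(cos theta, sin theta)|^2 = 1.
    dx, dy = math.cos(theta), math.sin(theta)
    bq = px * dx + py * dy
    cq = px * px + py * py - 1          # negative for an interior point
    root = math.sqrt(bq * bq - cq)
    return abs(-bq - root), abs(-bq + root)

px, py = 0.3, -0.2                      # any interior point
a, b = chord_segments(px, py, 0.7)
c, d = chord_segments(px, py, 2.1)
print(a * b, c * d)                     # equal: the power of the point
```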
Tangent
A line drawn perpendicular to a radius through the end point of the radius lying on the circle is a tangent to the circle.
A line drawn perpendicular to a tangent through the point of contact with a circle passes through the centre of the circle.
Two tangents can always be drawn to a circle from any point outside the circle, and these tangents are equal in length.
If a tangent at A and a tangent at B intersect at the exterior point P, then denoting the centre as O, the angles ∠BOA and ∠BPA are supplementary.
If AD is tangent to the circle at A and if AQ is a chord of the circle, then ∠DAQ = ½ arc(AQ).
Theorems
The chord theorem states that if two chords, CD and EB, intersect at A, then AC × AD = AB × AE.
If two secants, AE and AD, also cut the circle at B and C respectively, then AC × AD = AB × AE (corollary of the chord theorem).
A tangent can be considered a limiting case of a secant whose ends are coincident. If a tangent from an external point A meets the circle at F and a secant from the external point A meets the circle at C and D respectively, then AF² = AC × AD (tangent–secant theorem).
The angle between a chord and the tangent at one of its endpoints is equal to one half the angle subtended at the centre of the circle, on the opposite side of the chord (tangent chord angle).
If the angle subtended by the chord at the centre is 90°, then ℓ = r√2, where ℓ is the length of the chord, and r is the radius of the circle.
If two secants are inscribed in the circle as shown at right, then the measurement of angle A is equal to one half the difference of the measurements of the enclosed arcs (DE and BC). That is, 2∠CAB = ∠DOE − ∠BOC, where O is the centre of the circle (secant–secant theorem).
Inscribed angles
An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°).
Sagitta
The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle.
Given the length y of a chord and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines: r = y²/(8x) + x/2.
Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2r − x) in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2r − x)x = (y/2)². Solving for r, we find the required result.
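A one-line Python sketch of this radius formula (illustrative name):

```python
def radius_from_sagitta(x, y):
    # r = y^2/(8x) + x/2, rearranged from (2r - x) * x = (y/2)^2,
    # where y is the chord length and x the sagitta.
    return y * y / (8 * x) + x / 2

# A chord of length 2 with sagitta 1 belongs to a unit circle:
print(radius_from_sagitta(1.0, 2.0))  # 1.0
```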
Compass and straightedge constructions
There are many compass-and-straightedge constructions resulting in circles.
The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.
Construction with given diameter
Construct the midpoint M of the diameter.
Construct the circle with centre M passing through one of the endpoints of the diameter (it will also pass through the other endpoint).
Construction through three noncollinear points
Name the points P, Q and R.
Construct the perpendicular bisector of the segment PQ.
Construct the perpendicular bisector of the segment PR.
Label the point of intersection of these two perpendicular bisectors M. (They meet because the points are not collinear.)
Construct the circle with centre M passing through one of the points P, Q or R (it will also pass through the other two points).
Circle of Apollonius
Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B. (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points.
The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar: AP/BP = AC/BC.
Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees; that is, a right angle. The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter.
Second, every point on the indicated circle satisfies the given ratio; see the cited references for a proof.
Cross-ratios
A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one: |[A, B; C, P]| = 1.
Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio is on the unit circle in the complex plane.
Generalised circles
If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition |AP|/|BP| = |AC|/|BC|
is not a circle, but rather a line.
Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.
Inscription in or circumscription about other figures
In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle.
About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices.
A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon. Every regular polygon and every triangle is a tangential polygon.
A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon.
A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle.
Limiting case of other figures
The circle can be viewed as a limiting case of various other figures:
The series of regular polygons with n sides has the circle as its limit as n approaches infinity. This fact was applied by Archimedes to approximate π.
A Cartesian oval is a set of points such that a weighted sum of the distances from any of its points to two fixed points (foci) is a constant. An ellipse is the case in which the weights are equal. A circle is an ellipse with an eccentricity of zero, meaning that the two foci coincide with each other as the centre of the circle. A circle is also a different special case of a Cartesian oval in which one of the weights is zero.
A superellipse has an equation of the form |x/a|ⁿ + |y/b|ⁿ = 1 for positive a, b, and n. A supercircle has b = a. A circle is the special case of a supercircle in which n = 2.
A Cassini oval is a set of points such that the product of the distances from any of its points to two fixed points is a constant. When the two fixed points coincide, a circle results.
A curve of constant width is a figure whose width, defined as the perpendicular distance between two distinct parallel lines each intersecting its boundary in a single point, is the same regardless of the direction of those two parallel lines. The circle is the simplest example of this type of figure.
Locus of constant sum
Consider a finite set of points in the plane. The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points.
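This can be checked numerically; in the following Python sketch (illustrative, using the statement above), the sum of squared distances takes the same value at several points of a circle centred at the centroid:

```python
import math

# The sum of squared distances to a finite point set is constant on any
# circle centred at the centroid of the points.
pts = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
gx = sum(p[0] for p in pts) / len(pts)
gy = sum(p[1] for p in pts) / len(pts)

def sum_sq(x, y):
    return sum((x - px) ** 2 + (y - py) ** 2 for px, py in pts)

r = 2.0
print([round(sum_sq(gx + r * math.cos(t), gy + r * math.sin(t)), 9)
       for t in (0.0, 1.0, 2.5, 4.0)])   # four equal values
```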
A generalisation for higher powers of distances is obtained if the vertices of a regular polygon are taken as the given points: the locus of points such that the sum of the 2m-th powers of the distances to the vertices of a given regular polygon with circumradius R is constant is a circle (provided the constant sum is large enough), whose centre is the centroid of the polygon.
In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added and so forth.
Squaring the circle
Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge.
In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts.
Generalisations
In other p-norms
Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In the p-norm, distance is determined by ‖(x, y)‖p = (|x|^p + |y|^p)^(1/p).
In Euclidean geometry, p = 2, giving the familiar distance √(x² + y²).
In taxicab geometry, p = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2·r using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r. Thus, a circle's circumference is 8r, and the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |x| + |y| = 1 in Cartesian coordinates and r = 1/(|cos θ| + |sin θ|)
in polar coordinates.
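A short Python check of the taxicab circumference (illustrative):

```python
# The taxicab unit circle |x| + |y| = 1: its four sides each have taxicab
# length 2, giving circumference 8 and a "pi" of circumference/diameter = 4.
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

corners = [(1, 0), (0, 1), (-1, 0), (0, -1)]
perimeter = sum(taxicab(corners[i], corners[(i + 1) % 4]) for i in range(4))
print(perimeter, perimeter / 2)  # 8 4.0  (the diameter is 2)
```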
A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre.
A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalise to higher dimensions.
Topological definition
The circle is the one-dimensional hypersphere (the 1-sphere).
In topology, a circle is not limited to the geometric concept: the term extends to all of its homeomorphic images. Two topological circles are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy).
Specially named circles
Apollonian circles
Archimedean circle
Archimedes' twin circles
Bankoff circle
Carlyle circle
Chromatic circle
Circle of antisimilitude
Ford circle
Geodesic circle
Johnson circles
Schoch circles
Woo circles
Of a triangle
Apollonius circle of the excircles
Brocard circle
Excircle
Incircle
Lemoine circle
Lester circle
Malfatti circles
Mandart circle
Nine-point circle
Orthocentroidal circle
Parry circle
Polar circle (geometry)
Spieker circle
Van Lamoen circle
Of certain quadrilaterals
Eight-point circle of an orthodiagonal quadrilateral
Of a conic section
Director circle
Directrix circle
Of a torus
Villarceau circles
See also
Notes
References
Further reading
External links
Elementary shapes
Conic sections
Pi | Circle | [
"Mathematics"
] | 5,837 | [
"Circles",
"Pi"
] |
6,229 | https://en.wikipedia.org/wiki/Colossus%20computer | Colossus was a set of computers developed by British codebreakers in the years 1943–1945 to help in the cryptanalysis of the Lorenz cipher. Colossus used thermionic valves (vacuum tubes) to perform Boolean and counting operations. Colossus is thus regarded as the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.
Colossus was designed by General Post Office (GPO) research telephone engineer Tommy Flowers based on plans developed by mathematician Max Newman at the Government Code and Cypher School (GC&CS) at Bletchley Park.
Alan Turing's use of probability in cryptanalysis (see Banburismus) contributed to its design. It has sometimes been erroneously stated that Turing designed Colossus to aid the cryptanalysis of the Enigma. (Turing's machine that helped decode Enigma was the electromechanical Bombe, not Colossus.)
The prototype, Colossus Mark 1, was shown to be working in December 1943 and was in use at Bletchley Park by early 1944. An improved Colossus Mark 2 that used shift registers to run five times faster first worked on 1 June 1944, just in time for the Normandy landings on D-Day. Ten Colossi were in use by the end of the war and an eleventh was being commissioned. Bletchley Park's use of these machines allowed the Allies to obtain a vast amount of high-level military intelligence from intercepted radiotelegraphy messages between the German High Command (OKW) and their army commands throughout occupied Europe.
The existence of the Colossus machines was kept secret until the mid-1970s. All but two machines were dismantled into such small parts that their use could not be inferred. The two retained machines were eventually dismantled in the 1960s. In January 2024, new photos were released by GCHQ that showed re-engineered Colossus in a very different environment from the Bletchley Park buildings, presumably at GCHQ Cheltenham. A functioning reconstruction of a Mark 2 Colossus was completed in 2008 by Tony Sale and a team of volunteers; it is on display in The National Museum of Computing at Bletchley Park.
Purpose and origins
The Colossus computers were used to help decipher intercepted radio teleprinter messages that had been encrypted using an unknown device. Intelligence information revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" (sawfish). This led the British to call encrypted German teleprinter traffic "Fish", and the unknown machine and its intercepted messages "Tunny" (tunafish).
Before the Germans increased the security of their operating procedures, British cryptanalysts diagnosed how the unseen machine functioned and built an imitation of it called "British Tunny".
It was deduced that the machine had twelve wheels and used a Vernam ciphering technique on message characters in the standard 5-bit ITA2 telegraph code. It did this by combining the plaintext characters with a stream of key characters using the XOR Boolean function to produce the ciphertext.
In August 1941, a blunder by German operators led to the transmission of two versions of the same message with identical machine settings. These were intercepted and worked on at Bletchley Park. First, John Tiltman, a very talented GC&CS cryptanalyst, derived a keystream of almost 4000 characters. Then Bill Tutte, a newly arrived member of the Research Section, used this keystream to work out the logical structure of the Lorenz machine. He deduced that the twelve wheels consisted of two groups of five, which he named the χ (chi) and ψ (psi) wheels, the remaining two he called μ (mu) or "motor" wheels. The chi wheels stepped regularly with each letter that was encrypted, while the psi wheels stepped irregularly, under the control of the motor wheels.
With a sufficiently random keystream, a Vernam cipher removes the natural language property of a plaintext message of having an uneven frequency distribution of the different characters, to produce a uniform distribution in the ciphertext. The Tunny machine did this well. However, the cryptanalysts worked out that by examining the frequency distribution of the character-to-character changes in the ciphertext, instead of the plain characters, there was a departure from uniformity which provided a way into the system. This was achieved by "differencing" in which each bit or character was XOR-ed with its successor. After Germany surrendered, allied forces captured a Tunny machine and discovered that it was the electromechanical Lorenz SZ (Schlüsselzusatzgerät, cipher attachment) in-line cipher machine.
In order to decrypt the transmitted messages, two tasks had to be performed. The first was "wheel breaking", which was the discovery of the cam patterns for all the wheels. These patterns were set up on the Lorenz machine and then used for a fixed period of time for a succession of different messages. Each transmission, which often contained more than one message, was enciphered with a different start position of the wheels. Alan Turing invented a method of wheel-breaking that became known as Turingery. Turing's technique was further developed into "Rectangling", for which Colossus could produce tables for manual analysis. Colossi 2, 4, 6, 7 and 9 had a "gadget" to aid this process.
The second task was "wheel setting", which worked out the start positions of the wheels for a particular message and could only be attempted once the cam patterns were known. It was this task for which Colossus was initially designed. To discover the start position of the chi wheels for a message, Colossus compared two character streams, counting statistics from the evaluation of programmable Boolean functions. The two streams were the ciphertext, which was read at high speed from a paper tape, and the keystream, which was generated internally, in a simulation of the unknown German machine. After a succession of different Colossus runs to discover the likely chi-wheel settings, they were checked by examining the frequency distribution of the characters in the processed ciphertext. Colossus produced these frequency counts.
Decryption processes
By using differencing and knowing that the psi wheels did not advance with each character, Tutte worked out that trying just two differenced bits (impulses) of the chi-stream against the differenced ciphertext would produce a statistic that was non-random. This became known as Tutte's "1+2 break in". It involved calculating the following Boolean function: ΔZ1 ⊕ ΔZ2 ⊕ Δχ1 ⊕ Δχ2
and counting the number of times it yielded "false" (zero). If this number exceeded a pre-defined threshold value known as the "set total", it was printed out. The cryptanalyst would examine the printout to determine which of the putative start positions was most likely to be the correct one for the chi-1 and chi-2 wheels.
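The counting idea can be sketched in modern terms. The following Python fragment (toy data and illustrative names, simplified to a single impulse and wheel rather than the full 1+2 run) differences the streams, combines them by XOR, counts the zeros, and flags counts above the set total:

```python
# Sketch of the "1+2"-style counting: difference ("delta") the streams,
# XOR them, and count zeros; a count above the "set total" flags a
# plausible wheel start position. The wheel pattern and message bits are
# toy data, not genuine Tunny traffic.

def delta(bits):
    # "Differencing": XOR each bit with its successor.
    return [a ^ b for a, b in zip(bits, bits[1:])]

def score(cipher_imp, wheel, start):
    stream = [wheel[(start + i) % len(wheel)] for i in range(len(cipher_imp))]
    dz, dchi = delta(cipher_imp), delta(stream)
    return sum(1 for x, y in zip(dz, dchi) if x ^ y == 0)

chi1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1]                  # toy cam pattern
cipher = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1] * 8
set_total = 70                                            # chosen threshold
for start in range(len(chi1)):
    s = score(cipher, chi1, start)
    flag = "  <-- above set total" if s > set_total else ""
    print(f"start {start:2d}: count {s}{flag}")
```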
This technique would then be applied to other pairs of, or single, impulses to determine the likely start position of all five chi wheels. From this, the de-chi (D) of a ciphertext could be obtained, from which the psi component could be removed by manual methods. If the frequency distribution of characters in the de-chi version of the ciphertext was within certain bounds, "wheel setting" of the chi wheels was considered to have been achieved, and the message settings and de-chi were passed to the "Testery". This was the section at Bletchley Park led by Major Ralph Tester where the bulk of the decrypting work was done by manual and linguistic methods.
Colossus could also derive the start position of the psi and motor wheels. The feasibility of utilizing this additional capability regularly was made possible in the last few months of the war when there were plenty of Colossi available and the number of Tunny messages had declined.
Design and construction
Colossus was developed for the "Newmanry", the section headed by the mathematician Max Newman that was responsible for machine methods against the twelve-rotor Lorenz SZ40/42 on-line teleprinter cipher machine (code-named Tunny, for tunafish). The Colossus design arose out of a parallel project that produced a less-ambitious counting machine dubbed "Heath Robinson". Although the Heath Robinson machine proved the concept of machine analysis for this part of the process, it had serious limitations. The electro-mechanical parts were relatively slow and it was difficult to synchronise two looped paper tapes, one containing the enciphered message, and the other representing part of the keystream of the Lorenz machine. Also the tapes tended to stretch and break when being read at up to 2000 characters per second.
Tommy Flowers MBE was a senior electrical engineer and Head of the Switching Group at the Post Office Research Station at Dollis Hill. Prior to his work on Colossus, he had been involved with GC&CS at Bletchley Park from February 1941 in an attempt to improve the Bombes that were used in the cryptanalysis of the German Enigma cipher machine. He was recommended to Max Newman by Alan Turing, who had been impressed by his work on the Bombes. The main components of the Heath Robinson machine were as follows.
A tape transport and reading mechanism that ran the looped key and message tapes at between 1000 and 2000 characters per second.
A combining unit that implemented the logic of Tutte's method.
A counting unit that had been designed by C. E. Wynn-Williams of the Telecommunications Research Establishment (TRE) at Malvern, which counted the number of times the logical function returned a specified truth value.
Flowers had been brought in to design the Heath Robinson's combining unit. He was not impressed by the system of a key tape that had to be kept synchronised with the message tape and, on his own initiative, he designed an electronic machine which eliminated the need for the key tape by having an electronic analogue of the Lorenz (Tunny) machine. He presented this design to Max Newman in February 1943, but the idea that the one to two thousand thermionic valves (vacuum tubes and thyratrons) proposed could work together reliably was greeted with great scepticism, so more Robinsons were ordered from Dollis Hill. Flowers, however, knew from his pre-war work that most thermionic valve failures occurred as a result of the thermal stresses at power-up, so not powering a machine down reduced failure rates to very low levels. Additionally, if the heaters were started at a low voltage then slowly brought up to full voltage, thermal stress was reduced. The valves themselves could be soldered in to avoid problems with plug-in bases, which could be unreliable. Flowers persisted with the idea and obtained support from the Director of the Research Station, W. Gordon Radley.
Flowers and his team of some fifty people in the switching group spent eleven months from early February 1943 designing and building a machine that dispensed with the second tape of the Heath Robinson, by generating the wheel patterns electronically. Flowers used some of his own money for the project. This prototype, Mark 1 Colossus, contained 1,600 thermionic valves (tubes). It performed satisfactorily at Dollis Hill on 8 December 1943 and was dismantled and shipped to Bletchley Park, where it was delivered on 18 January and re-assembled by Harry Fensom and Don Horwood. It was operational in January and it successfully attacked its first message on 5 February 1944. It was a large structure and was dubbed 'Colossus'. A memo held in the National Archives written by Max Newman on 18 January 1944 records that "Colossus arrives today".
During the development of the prototype, an improved design had been developed – the Mark 2 Colossus. Four of these were ordered in March 1944 and by the end of April the number on order had been increased to twelve. Dollis Hill was put under pressure to have the first of these working by 1 June. Allen Coombs took over leadership of the production Mark 2 Colossi, the first of which – containing 2,400 valves – became operational at 08:00 on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Subsequently, Colossi were delivered at the rate of about one a month. By the time of V-E Day there were ten Colossi working at Bletchley Park and a start had been made on assembling an eleventh. Seven of the Colossi were used for 'wheel setting' and three for 'wheel breaking'.
The main units of the Mark 2 design were as follows.
A tape transport with an 8-photocell reading mechanism.
A six character FIFO shift register.
Twelve thyratron ring stores that simulated the Lorenz machine generating a bit-stream for each wheel.
Panels of switches for specifying the program and the "set total".
A set of functional units that performed Boolean operations.
A "span counter" that could suspend counting for part of the tape.
A master control that handled clocking, start and stop signals, counter readout and printing.
Five electronic counters.
An electric typewriter.
Most of the design of the electronics was the work of Tommy Flowers, assisted by William Chandler, Sidney Broadhurst and Allen Coombs; with Eric Speight and Arnold Lynch developing the photoelectric reading mechanism. Coombs remembered Flowers, having produced a rough draft of his design, tearing it into pieces that he handed out to his colleagues for them to do the detailed design and get their team to manufacture it. The Mark 2 Colossi were both five times faster and simpler to operate than the prototype.
Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal storage for the data. The design overcame the problem of synchronizing the electronics with the speed of the message tape by generating a clock signal from reading its sprocket holes. The speed of operation was thus limited by the mechanics of reading the tape. During development, the tape reader was tested up to 9700 characters per second (53 mph) before the tape disintegrated. So 5000 characters per second was settled on as the speed for regular use. Flowers designed a 6-character shift register, which was used both for computing the delta function (ΔZ) and for testing five different possible starting points of Tunny's wheels in the five processors. This five-way parallelism enabled five simultaneous tests and counts to be performed giving an effective processing speed of 25,000 characters per second. The computation used algorithms devised by W. T. Tutte and colleagues to decrypt a Tunny message.
Operation
The Newmanry was staffed by cryptanalysts, operators from the Women's Royal Naval Service (WRNS) – known as "Wrens" – and engineers who were permanently on hand for maintenance and repair. By the end of the war the staffing was 272 Wrens and 27 men.
The first job in operating Colossus for a new message was to prepare the paper tape loop. This was performed by the Wrens who stuck the two ends together using Bostik glue, ensuring that there was a 150-character length of blank tape between the end and the start of the message. Using a special hand punch they inserted a start hole between the third and fourth channels sprocket holes from the end of the blank section, and a stop hole between the fourth and fifth channels sprocket holes from the end of the characters of the message. These were read by specially positioned photocells and indicated when the message was about to start and when it ended. The operator would then thread the paper tape through the gate and around the pulleys of the bedstead and adjust the tension. The two-tape bedstead design had been carried on from Heath Robinson so that one tape could be loaded whilst the previous one was being run. A switch on the Selection Panel specified the "near" or the "far" tape.
After performing various resetting and zeroizing tasks, the Wren operators would, under instruction from the cryptanalyst, operate the "set total" decade switches and the K2 panel switches to set the desired algorithm. They would then start the bedstead tape motor and lamp and, when the tape was up to speed, operate the master start switch.
Programming
Howard Campaigne, a mathematician and cryptanalyst from the US Navy's OP-20-G, wrote a foreword to Flowers' 1983 paper "The Design of Colossus".
Colossus was not a stored-program computer. The input data for the five parallel processors was read from the looped message paper tape and the electronic pattern generators for the chi, psi and motor wheels. The programs for the processors were set and held on the switches and jack panel connections. Each processor could evaluate a Boolean function and count and display the number of times it yielded the specified value of "false" (0) or "true" (1) for each pass of the message tape.
Input to the processors came from two sources, the shift registers from tape reading and the thyratron rings that emulated the wheels of the Tunny machine. The characters on the paper tape were called Z and the characters from the Tunny emulator were referred to by the Greek letters that Bill Tutte had given them when working out the logical structure of the machine. On the selection panel, switches specified either Z or ΔZ, either χ or Δχ, and either ψ or Δψ for the data to be passed to the jack field and 'K2 switch panel'. These signals from the wheel simulators could be specified as stepping on with each new pass of the message tape or not.
The K2 switch panel had a group of switches on the left-hand side to specify the algorithm. The switches on the right-hand side selected the counter to which the result was fed. The plugboard allowed less specialized conditions to be imposed. Overall the K2 switch panel switches and the plugboard allowed about five billion different combinations of the selected variables.
As an example: a set of runs for a message tape might initially involve two chi wheels, as in Tutte's 1+2 algorithm. Such a two-wheel run was called a long run, taking on average eight minutes unless the parallelism was utilised to cut the time by a factor of five. The subsequent runs might only involve setting one chi wheel, giving a short run taking about two minutes. After the initial long run, the choice of the next algorithm to be tried was specified by the cryptanalyst. Experience showed, however, that decision trees for this iterative process could be produced for use by the Wren operators in a proportion of cases.
Influence and fate
Although the Colossus was the first of the electronic digital machines with programmability, albeit limited by modern standards, it was not a general-purpose machine, being designed for a range of cryptanalytic tasks, most involving counting the results of evaluating Boolean algorithms.
A Colossus computer was thus not a fully Turing complete machine. However, University of San Francisco professor Benjamin Wells has shown that if all ten Colossus machines made were rearranged in a specific cluster, then the entire set of computers could have simulated a universal Turing machine, and thus be Turing complete.
Colossus and the reasons for its construction were highly secret and remained so for 30 years after the War. Consequently, it was not included in the history of computing hardware for many years, and Flowers and his associates were deprived of the recognition they were due. All but two of the Colossi were dismantled after the war and parts returned to the Post Office. Some parts, sanitised as to their original purpose, were taken to Max Newman's Royal Society Computing Machine Laboratory at Manchester University. Two Colossi, along with two Tunny machines, were retained and moved to GCHQ's new headquarters at Eastcote in April 1946, and then to Cheltenham between 1952 and 1954. One of the Colossi, known as Colossus Blue, was dismantled in 1959; the other in the 1960s. Tommy Flowers was ordered to destroy all documentation. He duly burnt the documents in a furnace and later described that order as a terrible mistake.
The Colossi were adapted for other purposes, with varying degrees of success; in their later years they were used for training. Jack Good related how he was the first to use Colossus after the war, persuading the US National Security Agency that it could be used to perform a function for which they were planning to build a special-purpose machine. Colossus was also used to perform character counts on one-time pad tape to test for non-randomness.
A small number of people who were associated with Colossus—and knew that large-scale, reliable, high-speed electronic digital computing devices were feasible—played significant roles in early computer work in the UK and probably in the US. However, being so secret, it had little direct influence on the development of later computers; it was EDVAC that was the seminal computer architecture of the time. In 1972, Herman Goldstine, who was unaware of Colossus and its legacy to the projects of people such as Alan Turing (ACE), Max Newman (Manchester computers) and Harry Huskey (Bendix G-15), wrote an account of early computing that made no mention of it.
Professor Brian Randell, who unearthed information about Colossus in the 1970s, commented on this omission.
Randell's efforts started to bear fruit in the mid-1970s. The secrecy about Bletchley Park had been broken when Group Captain Winterbotham published his book The Ultra Secret in 1974. Randell was researching the history of computer science in Britain for a conference on the history of computing held at the Los Alamos Scientific Laboratory, New Mexico on 10–15 June 1976, and got permission to present a paper on the wartime development of the Colossi at the Post Office Research Station, Dollis Hill (in October 1975 the British Government had released a series of captioned photographs from the Public Record Office). The interest in the "revelations" in his paper resulted in a special evening meeting when Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article, The First Electronic Computer, in several journals.
In October 2000, a 500-page technical report on the Tunny cipher and its cryptanalysis—entitled General Report on Tunny—was released by GCHQ to the national Public Record Office, and it contains a fascinating paean to Colossus by the cryptographers who worked with it.
Reconstruction
A team led by Tony Sale built a fully functional reconstruction of a Colossus Mark 2 between 1993 and 2008. In spite of the blueprints and hardware being destroyed, a surprising amount of material had survived, mainly in engineers' notebooks, but a considerable amount of it in the U.S. The optical tape reader might have posed the biggest problem, but Dr. Arnold Lynch, its original designer, was able to redesign it to his own original specification. The reconstruction is on display, in the historically correct place for Colossus No. 9, at The National Museum of Computing, in H Block Bletchley Park in Milton Keynes, Buckinghamshire.
In November 2007, to celebrate the project completion and to mark the start of a fundraising initiative for The National Museum of Computing, a Cipher Challenge pitted the rebuilt Colossus against radio amateurs worldwide in being first to receive and decode three messages enciphered using the Lorenz SZ42 and transmitted from radio station DL0HNF in the Heinz Nixdorf MuseumsForum computer museum. The challenge was easily won by radio amateur Joachim Schüth, who had carefully prepared for the event and developed his own signal processing and code-breaking code using Ada. The Colossus team were hampered by their wish to use World War II radio equipment, delaying them by a day because of poor reception conditions. Nevertheless, the victor's 1.4 GHz laptop, running his own code, took less than a minute to find the settings for all 12 wheels. The German codebreaker said: "My laptop digested ciphertext at a speed of 1.2 million characters per second—240 times faster than Colossus. If you scale the CPU frequency by that factor, you get an equivalent clock of 5.8 MHz for Colossus. That is a remarkable speed for a computer built in 1944."
The Cipher Challenge verified the successful completion of the rebuilding project. "On the strength of today's performance Colossus is as good as it was six decades ago", commented Tony Sale. "We are delighted to have produced a fitting tribute to the people who worked at Bletchley Park and whose brainpower devised these fantastic machines which broke these ciphers and shortened the war by many months."
Other meanings
There was a fictional computer named Colossus in the 1970 film Colossus: The Forbin Project which was based on the 1966 novel Colossus by D. F. Jones. This was a coincidence as it pre-dates the public release of information about Colossus, or even its name.
Neal Stephenson's novel Cryptonomicon (1999) also contains a fictional treatment of the historical role played by Turing and Bletchley Park.
See also
History of computing hardware
List of vacuum-tube computers
Manchester Baby
Z3
Z4
Footnotes
References
Updated and extended version of Action This Day: From the Breaking of the Enigma Code to the Birth of the Modern Computer, Bantam Press, 2001.
Describes the operation of Colossus in breaking Tunny messages.
Further reading
A short film made by Google to celebrate Colossus and those who built it, in particular Tommy Flowers.
– A detailed description of the cryptanalysis of Tunny, and some details of Colossus (contains some minor errors)
A guided tour of the history and geography of the Park, written by one of the founder members of the Bletchley Park Trust
– Comparison of the first computers, with a chapter about Colossus and its reconstruction by Tony Sale.
A slender (20-page) booklet, containing the same material as Tony Sale's website (see below)
External links
Early computer development
The National Museum of Computing (TNMOC)
TNMOC: The 75th anniversary of the first attack
Tony Sale's Codes and Ciphers Contains a great deal of information, including:
Colossus, the revolution in code breaking
Lorenz Cipher and the Colossus
The machine age comes to Fish codebreaking
The Colossus Rebuild Project
The Colossus Rebuild Project: Evolving to the Colossus Mk 2
Walk around Colossus A detailed tour of the replica Colossus – make sure to click on the "More Text" links on each image to see the informative detailed text about that part of Colossus
IEEE lecture – Transcript of a lecture Tony Sale gave describing the reconstruction project
Brian Randell's 1976 lecture on the Colossus
BBC news article reporting on the replica Colossus
BBC news article: "Colossus cracks codes once more"
BBC news article: "Bletchley's code-cracking Colossus" with video interviews 2010-02-02
Website on Copeland's 2006 book with much information and links to recently declassified information
Was the Manchester Baby conceived at Bletchley Park?
online virtual simulation of Colossus
1940s computers
Early British computers
Bletchley Park
Cryptanalytic devices
Military computers
Vacuum tube computers
World War II British electronics
English inventions
Computer-related introductions in 1943
Computer-related introductions in 1944
Computer-related introductions in 1945
Serial computers | Colossus computer | [
"Technology"
] | 5,814 | [
"Serial computers",
"Computers"
] |
6,233 | https://en.wikipedia.org/wiki/Connected%20space | In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that are used to distinguish topological spaces.
A subset of a topological space X is a connected set if it is a connected space when viewed as a subspace of X.
Some related but stronger conditions are path connected, simply connected, and n-connected. Another related notion is locally connected, which neither implies nor follows from connectedness.
Formal definition
A topological space X is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice.
For a topological space X the following conditions are equivalent:
X is connected, that is, it cannot be divided into two disjoint non-empty open sets.
The only subsets of X which are both open and closed (clopen sets) are X and the empty set.
The only subsets of X with empty boundary are X and the empty set.
X cannot be written as the union of two non-empty separated sets (sets for which each is disjoint from the other's closure).
All continuous functions from X to {0, 1} are constant, where {0, 1} is the two-point space endowed with the discrete topology. (For finite spaces these conditions can be checked by brute force, as sketched after this list.)
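For a finite topological space, connectedness can be decided by brute force; the following Python sketch (illustrative names) tests whether any proper non-empty subset is clopen:

```python
from itertools import combinations

# Brute-force connectedness test for a finite topological space: X is
# connected iff no proper non-empty subset is clopen, i.e. iff X is not
# the union of two disjoint non-empty open sets.
def is_connected(X, opens):
    opens = {frozenset(U) for U in opens}
    points = sorted(X)
    for k in range(1, len(points)):
        for combo in combinations(points, k):
            A = frozenset(combo)
            B = frozenset(points) - A
            if A in opens and B in opens:   # A is clopen: a separation
                return False
    return True

X = {0, 1}
sierpinski = [set(), {0}, {0, 1}]        # the Sierpinski space: connected
discrete = [set(), {0}, {1}, {0, 1}]     # discrete topology: disconnected
print(is_connected(X, sierpinski), is_connected(X, discrete))  # True False
```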
Historically this modern formulation of the notion of connectedness (in terms of no partition of into two separated sets) first appeared (independently) with N.J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century. See for details.
Connected components
Given some point x in a topological space X, the union of any collection of connected subsets such that each contains x will once again be a connected subset.
The connected component of a point x in X is the union of all connected subsets of X that contain x; it is the unique largest (with respect to inclusion) connected subset of X that contains x.
The maximal connected subsets (ordered by inclusion) of a non-empty topological space are called the connected components of the space.
The components of any topological space X form a partition of X: they are disjoint, non-empty and their union is the whole space.
Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers q1 < q2 are in different components. Take an irrational number a with q1 < a < q2, and then set A = {q ∈ Q : q < a} and B = {q ∈ Q : q > a}. Then (A, B) is a separation of Q, and q1 ∈ A, q2 ∈ B. Thus each component is a one-point set.
Let C be the connected component of a point x in a topological space X, and let Q be the intersection of all clopen sets containing x (called the quasi-component of x). Then C ⊆ Q, where the equality holds if X is compact Hausdorff or locally connected.
Disconnected spaces
A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open sets U containing x and V containing y such that X is the union of U and V. Clearly, any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.
Examples
The closed interval [0, 2] in the standard subspace topology is connected; although it can, for example, be written as the union of [0, 1) and [1, 2], the second set is not open in the chosen topology of [0, 2].
The union of [0, 1) and (1, 2] is disconnected; both of these intervals are open in the standard topological space [0, 1) ∪ (1, 2].
(0, 1) ∪ {3} is disconnected.
A convex subset of R^n is connected; it is actually simply connected.
A Euclidean plane excluding the origin, R^2 ∖ {(0, 0)}, is connected, but is not simply connected. The three-dimensional Euclidean space without the origin is connected, and even simply connected. In contrast, the one-dimensional Euclidean space without the origin is not connected.
A Euclidean plane with a straight line removed is not connected since it consists of two half-planes.
R, the space of real numbers with the usual topology, is connected.
The Sorgenfrey line is disconnected.
If even a single point is removed from R, the remainder is disconnected. However, if even a countable infinity of points are removed from R^n, where n ≥ 2, the remainder is connected. If n ≥ 3, then R^n remains simply connected after removal of countably many points.
Any topological vector space, e.g. any Hilbert space or Banach space, over a connected field (such as R or C), is simply connected.
Every discrete topological space with at least two elements is disconnected; in fact, such a space is totally disconnected. The simplest example is the discrete two-point space.
On the other hand, a finite set might be connected. For example, the spectrum of a discrete valuation ring consists of two points and is connected. It is an example of a Sierpiński space.
The Cantor set is totally disconnected; since the set contains uncountably many points, it has uncountably many components.
If a space X is homotopy equivalent to a connected space, then X is itself connected.
The topologist's sine curve is an example of a set that is connected but is neither path connected nor locally connected.
The general linear group GL(n, R) (that is, the group of n-by-n real, invertible matrices) consists of two connected components: the one with matrices of positive determinant and the other of negative determinant. In particular, it is not connected. In contrast, GL(n, C) is connected. More generally, the set of invertible bounded operators on a complex Hilbert space is connected.
The spectra of commutative local rings and integral domains are connected. More generally, the following are equivalent:
The spectrum of a commutative ring R is connected.
Every finitely generated projective module over R has constant rank.
R has no idempotents other than 0 and 1 (i.e., R is not a product of two rings in a nontrivial way).
An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.
Path connectedness
A path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0, 1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if and only if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is exactly one path-component. For non-empty spaces, this is equivalent to the statement that there is a path joining any two points in X. Again, many authors exclude the empty space.
Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line and the topologist's sine curve.
Subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals and rays of R.
Also, open subsets of R^n or C^n are connected if and only if they are path-connected.
Additionally, connectedness and path-connectedness are the same for finite topological spaces.
Arc connectedness
A space X is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding f : [0, 1] → X. An arc-component of X is a maximal arc-connected subset of X; or equivalently an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable.
Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a Δ-Hausdorff space, which is a space where each image of a path is closed. An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of 0 can be connected by a path but not by an arc.
Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let X be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces:
The continuous image of an arc-connected space may not be arc-connected: for example, a quotient map from an arc-connected space to a quotient with countably many (at least 2) topologically distinguishable points cannot be arc-connected, because such a quotient has too small a cardinality to contain an arc.
Arc-components may not be disjoint. For example, X has two overlapping arc-components.
An arc-connected product space may not be a product of arc-connected spaces. For example, X × R is arc-connected, but X is not.
Arc-components of a product space may not be products of arc-components of the marginal spaces. For example, X × R has a single arc-component, but X has two arc-components.
If arc-connected subsets have a non-empty intersection, then their union may not be arc-connected. For example, the arc-components of X intersect, but their union is not arc-connected.
Local connectedness
A topological space X is said to be locally connected at a point x if every neighbourhood of x contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open.
Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets.
An open subset of a locally path-connected space is connected if and only if it is path-connected.
This generalizes the earlier statement about R^n and C^n, each of which is locally path-connected. More generally, any topological manifold is locally path-connected.
Locally connected does not imply connected, nor does locally path-connected imply path-connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in R, such as (0, 1) ∪ (2, 3).
A classical example of a connected space that is not locally connected is the so-called topologist's sine curve, defined as T = {(0, 0)} ∪ {(x, sin(1/x)) : x ∈ (0, 1]}, with the Euclidean topology induced by inclusion in R^2.
Set operations
The intersection of connected sets is not necessarily connected.
The union of connected sets is not necessarily connected, as can be seen by considering X = (0, 1) ∪ (1, 2).
For example, two disjoint ellipses in the plane are each connected sets, but their union is not connected, since it can be partitioned into two disjoint open sets U and V.
This means that, if the union X is disconnected, then the collection {Xi} can be partitioned into two sub-collections, such that the unions of the sub-collections are disjoint and open in X. This implies that in several cases, a union of connected sets is necessarily connected. In particular:
If the common intersection of all sets is not empty (⋂ Xi ≠ ∅), then obviously they cannot be partitioned into collections with disjoint unions. Hence the union of connected sets with non-empty intersection is connected.
If the intersection of each pair of sets is not empty (Xi ∩ Xj ≠ ∅ for all i, j), then again they cannot be partitioned into collections with disjoint unions, so their union must be connected.
If the sets can be ordered as a "linked chain", i.e. indexed by integer indices with Xi ∩ Xi+1 ≠ ∅, then again their union must be connected.
If the sets are pairwise-disjoint and the quotient space X / {Xi} is connected, then X must be connected. Otherwise, if U ∪ V is a separation of X, then q(U) ∪ q(V) is a separation of the quotient space (since q(U) and q(V) are disjoint and open in the quotient space, for q the quotient map).
The set difference of connected sets is not necessarily connected. However, if X ⊇ Y and their difference X ∖ Y is disconnected (and thus can be written as a union of two open sets X1 and X2), then the union of Y with each such component is connected (i.e. Y ∪ Xi is connected for all i).
Theorems
Main theorem of connectedness: Let X and Y be topological spaces and let f : X → Y be a continuous function. If X is (path-)connected then the image f(X) is (path-)connected. This result can be considered a generalization of the intermediate value theorem.
Every path-connected space is connected.
In a locally path-connected space, every open connected set is path-connected.
Every locally path-connected space is locally connected.
A locally path-connected space is path-connected if and only if it is connected.
The closure of a connected subset is connected. Furthermore, any subset between a connected subset and its closure is connected.
The connected components are always closed (but in general not open).
The connected components of a locally connected space are also open.
The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed).
Every quotient of a connected (resp. locally connected, path-connected, locally path-connected) space is connected (resp. locally connected, path-connected, locally path-connected).
Every product of a family of connected (resp. path-connected) spaces is connected (resp. path-connected).
Every open subset of a locally connected (resp. locally path-connected) space is locally connected (resp. locally path-connected).
Every manifold is locally path-connected.
Every arc-wise connected space is path-connected, but a path-wise connected space may not be arc-wise connected.
The continuous image of an arc-wise connected set is arc-wise connected.
Graphs
Graphs have path connected subsets, namely those subsets for which every pair of points has a path of edges joining them.
But it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any n-cycle with n > 3 odd) is one such example.
As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets. Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs.
However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space.
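On the graph side, connectedness is checked by a traversal rather than by open sets. The sketch below is an illustrative addition (the function name and edge lists are arbitrary examples): it decides graph connectedness with a breadth-first search, which by the equivalence above also decides connectedness of the graph's realization as a topological space.

```python
# A minimal sketch: breadth-first search decides whether a finite graph
# is connected in the graph-theoretic sense.
from collections import deque

def is_connected(vertices, edges):
    """Return True if every vertex is reachable from an arbitrary start."""
    if not vertices:
        return True  # convention chosen here for the empty graph
    adjacency = {v: set() for v in vertices}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbour in adjacency[queue.popleft()]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen) == len(vertices)

print(is_connected({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)]))  # True (a path)
print(is_connected({1, 2, 3, 4}, [(1, 2), (3, 4)]))          # False (two pieces)
```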
Stronger forms of connectedness
There are stronger forms of connectedness for topological spaces, for instance:
If there exist no two disjoint non-empty open sets in a topological space X, then X must be connected; thus hyperconnected spaces are also connected.
Since a simply connected space is, by definition, also required to be path connected, any simply connected space is also connected. If the "path connectedness" requirement is dropped from the definition of simple connectivity, a simply connected space does not need to be connected.
Yet stronger versions of connectivity include the notion of a contractible space. Every contractible space is path connected and thus also connected.
In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve.
See also
References
Further reading
General topology
Properties of topological spaces | Connected space | [
"Mathematics"
] | 3,354 | [
"General topology",
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
6,239 | https://en.wikipedia.org/wiki/Contraction%20mapping | In mathematics, a contraction mapping, or contraction or contractor, on a metric space (M, d) is a function f from M to itself, with the property that there is some real number 0 ≤ k < 1 such that for all x and y in M,
d(f(x), f(y)) ≤ k·d(x, y).
The smallest such value of k is called the Lipschitz constant of f. Contractive maps are sometimes called Lipschitzian maps. If the above condition is instead satisfied for k ≤ 1, then the mapping is said to be a non-expansive map.
More generally, the idea of a contractive mapping can be defined for maps between metric spaces. Thus, if (M, d) and (N, d′) are two metric spaces, then f : M → N is a contractive mapping if there is a constant 0 ≤ k < 1 such that
d′(f(x), f(y)) ≤ k·d(x, y)
for all x and y in M.
Every contraction mapping is Lipschitz continuous and hence uniformly continuous (for a Lipschitz continuous function, the constant k is no longer necessarily less than 1).
A contraction mapping has at most one fixed point. Moreover, the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and that for any x in M the iterated function sequence x, f (x), f (f (x)), f (f (f (x))), ... converges to the fixed point. This concept is very useful for iterated function systems where contraction mappings are often used. Banach's fixed-point theorem is also applied in proving the existence of solutions of ordinary differential equations, and is used in one proof of the inverse function theorem.
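As a numeric illustration of this iteration (an addition, not from the source): the map f(x) = cos(x) sends [0, 1] into itself and satisfies |f′(x)| = |sin x| ≤ sin 1 < 1 there, so it is a contraction on that interval; the tolerance and starting point below are arbitrary choices.

```python
# A minimal sketch of Banach fixed-point iteration: x, f(x), f(f(x)), ...
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate until successive terms differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter iterations")

# Converges to the unique fixed point of cos on [0, 1] (~0.7390851332)
print(fixed_point(math.cos, 0.5))
```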
Contraction mappings play an important role in dynamic programming problems.
Firmly non-expansive mapping
A non-expansive mapping with k = 1 can be generalized to a firmly non-expansive mapping in a Hilbert space H if the following holds for all x and y in H:
‖f(x) − f(y)‖² ≤ ⟨x − y, f(x) − f(y)⟩.
This is a special case of averaged nonexpansive operators with α = 1/2. A firmly non-expansive mapping is always non-expansive, via the Cauchy–Schwarz inequality.
The class of firmly non-expansive maps is closed under convex combinations, but not compositions. This class includes proximal mappings of proper, convex, lower-semicontinuous functions, hence it also includes orthogonal projections onto non-empty closed convex sets. The class of firmly nonexpansive operators is equal to the set of resolvents of maximally monotone operators. Surprisingly, while iterating non-expansive maps has no guarantee to find a fixed point (e.g. multiplication by −1), firm non-expansiveness is sufficient to guarantee global convergence to a fixed point, provided a fixed point exists. More precisely, if Fix f := {x ∈ H : f(x) = x} ≠ ∅, then for any initial point x0 ∈ H, iterating
x_(n+1) = f(x_n)
yields convergence to a fixed point x ∈ Fix f. This convergence might be weak in an infinite-dimensional setting.
Subcontraction map
A subcontraction map or subcontractor is a map f on a metric space (M, d) such that
d(f(x), f(y)) ≤ d(x, y);
d(f(f(x)), f(x)) < d(f(x), x), unless x is a fixed point of f.
If the image of a subcontractor f is compact, then f has a fixed point.
Locally convex spaces
In a locally convex space (E, P) with topology given by a set P of seminorms, one can define for any p ∈ P a p-contraction as a map f such that there is some kp < 1 with p(f(x) − f(y)) ≤ kp·p(x − y). If f is a p-contraction for all p ∈ P and (E, P) is sequentially complete, then f has a fixed point, given as the limit of any sequence x_(n+1) = f(x_n), and if (E, P) is Hausdorff, then the fixed point is unique.
See also
Short map
Contraction (operator theory)
Transformation
Comparametric equation
Blackwell's contraction mapping theorem
CLRg property
References
Further reading
Fixed points (mathematics)
Metric geometry | Contraction mapping | [
"Mathematics"
] | 795 | [
"Fixed points (mathematics)",
"Mathematical analysis",
"Topology",
"Dynamical systems"
] |
6,246 | https://en.wikipedia.org/wiki/Covalent%20bond | A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding.
Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term covalent bond dates from 1939. The prefix co- means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory.
In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized.
History
The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors."
The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms (and in 1926 he also coined the term "photon" for the smallest unit of radiant energy). He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines.
Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two.
While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927. Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms.
Types of covalent bonds
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds.
Covalent bonds are also affected by the electronegativity of the connected atoms which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule.
Covalent structures
There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures.
One- and three-electron bonds
Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects.
The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2 can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds.
Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.
Resonance
There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3.
Aromaticity
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fit the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene.
In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent.
Hypervalence
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory.
Electron deficiency
In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms so that the molecules can instead be classified as electron-precise.
Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states.
Comparison of VB and MO theories
The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory, a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons.
The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands.
At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene.
Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it.
Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Covalency from atomic contribution to the electronic density of states
Evaluation of bond covalency is dependent on the basis set for approximate quantum-chemical methods such as COOP (crystal orbital overlap population), COHP (Crystal orbital Hamilton population), and BCOOP (Balanced crystal orbital overlap population). To overcome this issue, an alternative formulation of the bond covalency can be provided in this way.
The mass center cm(n, l, ml, ms) of an atomic orbital |n, l, ml, ms⟩ for atom A is defined as
cm^A(n, l, ml, ms) = ∫ E g^A_{|n,l,ml,ms⟩}(E) dE / ∫ g^A_{|n,l,ml,ms⟩}(E) dE
where g^A_{|n,l,ml,ms⟩}(E) is the contribution of the atomic orbital |n, l, ml, ms⟩ of the atom A to the total electronic density of states g(E) of the solid
g(E) = Σ_A Σ_{n,l} Σ_{ml,ms} g^A_{|n,l,ml,ms⟩}(E)
where the outer sum runs over all atoms A of the unit cell. The energy window is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond.
The relative position of the mass center of the |nA, lA⟩ levels of atom A with respect to the mass center of the |nB, lB⟩ levels of atom B is given as
C_{nA lA, nB lB} = −|cm^A(nA, lA) − cm^B(nB, lB)|
where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is
C_{A,B} = −|cm^A(lA) − cm^B(lB)|
where, for simplicity, we may omit the dependence on the principal quantum number n in the notation referring to C_{A,B}.
In this formalism, the greater the value of C_{A,B}, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent A−B bond. The quantity C_{A,B} is denoted as the covalency of the A−B bond, which is specified in the same units of the energy E.
Analogous effect in nuclear systems
An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High energy proton-proton scattering cross-section indicates that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common.
See also
Bonding in solids
Bond order
Coordinate covalent bond, also known as a dipolar bond or a dative covalent bond
Covalent bond classification (or LXZ notation)
Covalent radius
Disulfide bond
Hybridization
Hydrogen bond
Ionic bond
Linear combination of atomic orbitals
Metallic bonding
Noncovalent bonding
Resonance (chemistry)
References
Sources
External links
Covalent Bonds and Molecular Structure
Structure and Bonding in Chemistry—Covalent Bonds
Chemical bonding | Covalent bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,287 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
6,247 | https://en.wikipedia.org/wiki/Condensation%20polymer | In polymer chemistry, condensation polymers are any kind of polymers whose process of polymerization involves a condensation reaction (i.e. a small molecule, such as water or methanol, is produced as a byproduct). Natural proteins as well as some common plastics such as nylon and PETE are formed in this way. Condensation polymers are formed by polycondensation, when the polymer is formed by condensation reactions between species of all degrees of polymerization, or by condensative chain polymerization, when the polymer is formed by sequential addition of monomers to an active site in a chain reaction. The main alternative forms of polymerization are chain polymerization and polyaddition, both of which give addition polymers.
Condensation polymerization is a form of step-growth polymerization. Linear polymers are produced from bifunctional monomers, i.e. compounds with two reactive end-groups. Common condensation polymers include polyesters, polyamides such as nylon, polyacetals, and proteins.
Polyamides
One important class of condensation polymers are polyamides. They arise from the reaction of carboxylic acid and an amine. Examples include nylons and proteins. When prepared from amino-carboxylic acids, e.g. amino acids, the stoichiometry of the polymerization includes co-formation of water:
n H2N-X-CO2H → [HN-X-C(O)]n + (n-1) H2O
When prepared from diamines and dicarboxylic acids, e.g. the production of nylon 66, the polymerization produces two molecules of water per repeat unit:
n H2N-X-NH2 + n HO2C-Y-CO2H → [HN-X-NHC(O)-Y-C(O)]n + (2n-1) H2O
Polyesters
Another important class of condensation polymers are polyesters. They arise from the reaction of a carboxylic acid and an alcohol. An example is polyethyleneterephthalate, the common plastic PETE (recycling #1 in the USA):
n HO-X-OH + n HO2C-Y-CO2H → [O-X-O2C-Y-C(O)]n + (2n-1) H2O
Safety and environmental considerations
Condensation polymers tend to be more biodegradable than addition polymers. The peptide or ester bonds between monomers can be hydrolysed, especially in the presence of catalysts or bacterial enzymes.
See also
Biopolymer
Epoxy resins
Polyamide
Polyester
References
External links
Polymers (and condensation polymers) - Virtual Text of Organic Chemistry, William Reusch
Polymer chemistry
Polymerization reactions | Condensation polymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 603 | [
"Polymerization reactions",
"Polymer chemistry",
"Materials science"
] |
6,249 | https://en.wikipedia.org/wiki/Timeline%20of%20computing | Timeline of computing presents events in the history of computing organized by year and grouped into six topic areas: predictions and concepts, first use and inventions, hardware systems and processors, operating systems, programming languages, and new application areas.
Detailed computing timelines: before 1950, 1950–1979, 1980–1989, 1990–1999, 2000–2009, 2010–2019, 2020–present
Graphical timeline
See also
History of compiler construction
History of computing hardware – up to third generation (1960s)
History of computing hardware (1960s–present) – third generation and later
History of the graphical user interface
History of the Internet
History of the World Wide Web
List of pioneers in computer science
Timeline of electrical and electronic engineering
Microprocessor chronology
Resources
Stephen White, A Brief History of Computing
The Computer History in time and space, Graphing Project, an attempt to build a graphical image of computer history, in particular operating systems.
External links
Visual History of Computing 1944-2013 (archived)
Digital Revolution | Timeline of computing | [
"Technology"
] | 196 | [
"Computers",
"History of computing",
"Digital Revolution"
] |
6,271 | https://en.wikipedia.org/wiki/Chemical%20reaction | A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. When chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction.
Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory.
History
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.
The artificial production of chemical substances already was a central goal for medieval alchemists. Examples include the synthesis of ammonium chloride from organic substances as described in the works (c. 850–950) attributed to Jābir ibn Ḥayyān, or the production of mineral acids such as sulfuric and nitric acids by later alchemists, starting from c. 1300. The production of mineral acids involved the heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis.
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations.
Regarding organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
Characteristics
The general characteristics of chemical reactions are:
Evolution of a gas
Formation of a precipitate
Change in temperature
Change in state
Equations
Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields". The tip of the arrow points in the direction in which the reaction proceeds. A double arrow () pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to the stoichiometry, the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in a schematic example below) by the appropriate integers a, b, c and d:
a A + b B -> c C + d D
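Balancing can be phrased as a small linear-algebra problem: each element contributes one equation relating the unknown coefficients, and the balanced equation is an integer vector in the nullspace of the composition matrix. The sketch below is an illustrative addition, assuming the sympy library and using Fe + O2 -> Fe2O3 as an example reaction.

```python
# A minimal sketch: balance a chemical equation by finding an integer
# nullspace vector of the element-composition matrix (sympy assumed).
from sympy import Matrix, lcm

# Columns: Fe, O2, Fe2O3 (the product column enters with a sign flip);
# rows: atom balance for Fe and for O.
composition = Matrix([
    [1, 0, -2],   # Fe atoms
    [0, 2, -3],   # O atoms
])

null = composition.nullspace()[0]        # rational coefficient vector
scale = lcm([term.q for term in null])   # clear the denominators
coeffs = [int(term * scale) for term in null]
print(coeffs)  # [4, 3, 2]  ->  4 Fe + 3 O2 -> 2 Fe2O3
```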
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.
Elementary reactions
The elementary reaction is the smallest division into which a chemical reaction can be decomposed, it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.
The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
AB -> A + B
Dissociation of a molecule AB into fragments A and B
For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction.
A + B -> AB
Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis.
HA + B -> A + HB
for example
NaCl + AgNO3 -> NaNO3 + AgCl(v)
Chemical equilibrium
Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentration and therefore change with the time of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on parameters such as temperature, pressure, and the materials involved, and is determined by the minimum free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with the Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with fewer moles of gas.
The reaction yield stabilizes at equilibrium but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant but does affect the equilibrium position.
Thermodynamics
Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release free energy. The associated free energy change of the reaction is composed of the changes of two different thermodynamic quantities, enthalpy and entropy:
ΔG = ΔH − T·ΔS
G: free energy, H: enthalpy, T: temperature, S: entropy, Δ: difference (change between original and product)
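As a numeric illustration (an addition using hypothetical values, not data for any real reaction), the sign of the free-energy change can flip with temperature when the enthalpy and entropy changes have the same sign:

```python
# A minimal sketch of deltaG = deltaH - T*deltaS with hypothetical numbers.
def gibbs_free_energy(delta_h, delta_s, temperature):
    """delta_h in J/mol, delta_s in J/(mol*K), temperature in K."""
    return delta_h - temperature * delta_s

dH = 50_000.0  # hypothetical endothermic reaction: +50 kJ/mol
dS = 200.0     # hypothetical entropy gain: +200 J/(mol*K)

for T in (200.0, 250.0, 300.0):
    dG = gibbs_free_energy(dH, dS, T)
    verdict = "exergonic (can proceed by itself)" if dG < 0 else "endergonic"
    print(f"T = {T:3.0f} K: deltaG = {dG / 1000:+5.1f} kJ/mol ({verdict})")
# The sign flips at T = dH / dS = 250 K for these made-up values.
```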
Reactions can be exothermic, where ΔH is negative and energy is released. Typical examples of exothermic reactions are combustion, precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous or dissolved reaction products, which have higher entropy. Since the entropy term in the free-energy change increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur preferably at lower temperatures. A change in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:
2CO(g) + MoO2(s) -> 2CO2(g) + Mo(s);
This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature. ΔH° is zero at a certain crossover temperature, above which the reaction becomes exothermic.
Changes in temperature can also reverse the direction tendency of a reaction. For example, the water gas shift reaction
CO(g) + H2O(v) <=> CO2(g) + H2(g)
is favored by low temperatures, but its reverse is favored by high temperatures. The shift in the direction of the reaction tendency occurs at a specific crossover temperature.
Reactions can also be characterized by their internal energy change, which takes into account changes in the entropy, volume and chemical potentials. The latter depends, among other things, on the activities of the involved substances.
dU = T·dS − p·dV + Σ μ·dn
U: internal energy, S: entropy, p: pressure, μ: chemical potential, n: number of molecules, d: small change sign
Kinetics
The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as:
Reactant concentrations, which usually make the reaction happen at a faster rate if raised through increased collisions per unit of time. Some reactions, however, have rates that are independent of reactant concentrations, due to a limited number of catalytic sites. These are called zero order reactions.
Surface area available for contact between the reactants, in particular solid ones in heterogeneous systems. Larger surface areas lead to higher reaction rates.
Pressure – increasing the pressure decreases the volume between molecules and therefore increases the frequency of collisions between the molecules.
Activation energy, which is defined as the amount of energy required to make the reaction start and carry on spontaneously. Higher activation energy implies that the reactants need more energy to start than a reaction with lower activation energy.
Temperature, which hastens reactions if raised, since higher temperature increases the energy of the molecules, creating more collisions per unit of time.
The presence or absence of a catalyst. Catalysts are substances that make weak bonds with reactants or intermediates and change the pathway (mechanism) of a reaction which in turn increases the speed of a reaction by lowering the activation energy needed for the reaction to take place. A catalyst is not destroyed or changed during a reaction, so it can be used again.
For some reactions, the presence of electromagnetic radiation, most notably ultraviolet light, is needed to promote the breaking of bonds to start the reaction. This is particularly true for reactions involving radicals.
Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate v of a first-order reaction, which could be the disintegration of a substance A, is given by:
v = −d[A]/dt = k·[A]
Its integration yields:
[A](t) = [A]0·e^(−k·t)
Here k is the first-order rate constant, having dimension 1/time, [A](t) is the concentration at a time t and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with a characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:
k = k0·e^(−Ea/(kB·T))
where Ea is the activation energy and kB is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.
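A short numeric sketch of the two formulas above follows (an addition; the pre-exponential factor k0 and activation energy Ea are hypothetical values, not measured constants):

```python
# A minimal sketch of first-order decay and the Arrhenius law.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def arrhenius(k0, ea, temperature):
    """k = k0 * exp(-Ea / (kB * T)), with Ea in joules per molecule."""
    return k0 * math.exp(-ea / (K_B * temperature))

def concentration(a0, k, t):
    """First-order decay: [A](t) = [A]0 * exp(-k * t)."""
    return a0 * math.exp(-k * t)

k = arrhenius(k0=1e13, ea=8.0e-20, temperature=298.15)  # hypothetical inputs
half_life = math.log(2) / k  # characteristic half-life of a first-order reaction
print(f"k = {k:.3e} 1/s, half-life = {half_life:.3e} s")
print(f"[A]/[A]0 after one half-life: {concentration(1.0, k, half_life):.3f}")
```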
Reaction types
Four basic types
Synthesis
In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form:
A + B->AB
Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide:
8Fe + S8->8FeS
Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water.
Decomposition
A decomposition reaction is when a more complex substance breaks down into its more simple parts. It is thus the opposite of a synthesis reaction and can be written as
AB->A + B
One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas:
2H2O->2H2 + O2
Single displacement
In a single displacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form of:
A + BC->AC + B
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make solid magnesium hydroxide and hydrogen gas:
Mg + 2H2O->Mg(OH)2 (v) + H2 (^)
Double displacement
In a double displacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds. These reactions are in the general form:
AB + CD->AD + CB
For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO4^2− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2.
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:
Pb(NO3)2 + 2KI->PbI2(v) + 2KNO3
Forward and backward reactions
According to Le Chatelier's Principle, reactions may proceed in the forward or reverse direction until they end or reach equilibrium.
Forward reactions
Reactions that proceed in the forward direction (from left to right) to approach equilibrium are often called spontaneous reactions, that is, ΔG is negative, which means that if they occur at constant temperature and pressure, they decrease the Gibbs free energy of the reaction. They require less energy to proceed in the forward direction. Reactions are usually written as forward reactions in the direction in which they are spontaneous. Examples:
Reaction of hydrogen and oxygen to form water:
2H2 + O2 -> 2H2O
Dissociation of acetic acid in water into acetate ions and hydronium ions:
CH3COOH + H2O <=> CH3COO- + H3O+
Backward reactions
Reactions that proceed in the backward direction to approach equilibrium are often called non-spontaneous reactions, that is, ΔG is positive, which means that if they occur at constant temperature and pressure, they increase the Gibbs free energy of the reaction. They require input of energy to proceed in the forward direction. Examples include:
Charging a normal DC battery (consisting of electrolytic cells) from an external electrical power source
Photosynthesis driven by absorption of electromagnetic radiation usually in the form of sunlight
6CO2 + 6H2O -> C6H12O6 + 6O2
Combustion
In a combustion reaction, an element or compound reacts with an oxidant, usually oxygen, often producing energy in the form of heat or light. Combustion reactions frequently involve a hydrocarbon. For instance, the combustion of 1 mole (114 g) of octane in oxygen
C8H18(l) + 25/2 O2(g)->8CO2 + 9H2O(l)
releases 5500 kJ. A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen.
2Mg(s) + O2->2MgO(s)
S(s) + O2(g)->SO2(g)
Oxidation and reduction
Redox reactions can be understood in terms of the transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is oxidized and the latter is reduced. Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state of atoms and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt:
2Na(s) + Cl2(g)->2NaCl(s)
In the reaction, sodium metal goes from an oxidation state of 0 (a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation of 0 (also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces a reduction in the other species and is considered the reducing agent.
Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativities, such as most metals, easily donate electrons and oxidize – they are reducing agents. On the contrary, many oxides or ions with high oxidation numbers of their non-oxygen atoms, such as MnO4−, CrO3, Cr2O72−, or OsO4, can gain one or two extra electrons and are strong oxidizing agents.
For some main-group elements the number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron, respectively. Noble gases themselves are chemically inactive.
The overall redox reaction can be balanced by combining the oxidation and reduction half-reactions multiplied by coefficients such that the number of electrons lost in the oxidation equals the number of electrons gained in the reduction.
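As a sketch of that balancing step, the multipliers for the two half-reactions can be computed directly from the electrons lost and gained; the Al/Cu example and the helper name below are ours, chosen for illustration:

from math import gcd

def half_reaction_multipliers(e_lost, e_gained):
    """Smallest multipliers making electrons lost equal electrons gained."""
    common = e_lost * e_gained // gcd(e_lost, e_gained)  # least common multiple
    return common // e_lost, common // e_gained

# Oxidation: Al -> Al3+ + 3 e-   (3 electrons lost)
# Reduction: Cu2+ + 2 e- -> Cu   (2 electrons gained)
print(half_reaction_multipliers(3, 2))  # (2, 3): 2 Al + 3 Cu2+ -> 2 Al3+ + 3 Cu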
An important class of redox reactions are the electrolytic electrochemical reactions, where electrons from the power supply at the negative electrode are used as the reducing agent and electron withdrawal at the positive electrode as the oxidizing agent. These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons are released in redox reactions and chemical energy is converted to electrical energy, is possible and used in batteries.
Complexation
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by providing lone pairs of the ligand into empty orbitals of the metal atom and forming dipolar bonds. The ligands are Lewis bases, they can be both ions and neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, saying that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom.
Acid–base reactions
In the Brønsted–Lowry acid–base theory, an acid–base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation:
HA (acid) + B (base) ⇌ A− (conjugate base) + HB+ (conjugate acid)
The reverse reaction is possible, and thus the acid/base and conjugate base/acid pairs are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants (Ka and Kb) of the involved substances. A special case of the acid–base reaction is neutralization, in which an acid and a base taken in exactly equal amounts form a neutral salt.
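To make the role of the dissociation constant concrete, here is a minimal Python sketch computing [H3O+] for a weak monoprotic acid from Ka; the acetic-acid value 1.8 × 10−5 is a commonly tabulated figure, used only as an example:

import math

def hydronium(c_acid, ka):
    """[H3O+] from Ka = x^2 / (c - x): positive root of x^2 + Ka*x - Ka*c = 0."""
    return (-ka + math.sqrt(ka * ka + 4.0 * ka * c_acid)) / 2.0

x = hydronium(0.10, 1.8e-5)      # 0.10 M acetic acid
print(round(-math.log10(x), 2))  # pH of about 2.88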
Acid-base reactions can have different definitions depending on the acid-base concept employed. Some of the most common are:
Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
Brønsted–Lowry definition: Acids are proton (H+) donors, bases are proton acceptors; this includes the Arrhenius definition.
Lewis definition: Acids are electron-pair acceptors, and bases are electron-pair donors; this includes the Brønsted-Lowry definition.
Precipitation
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by the removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue, while a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.
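The solubility-limit criterion amounts to a one-line test: a salt precipitates once the ion product Q exceeds its solubility product Ksp. A sketch; the Ksp of BaSO4, about 1.1 × 10−10, is a commonly tabulated value used here for illustration:

def will_precipitate(ion_product, ksp):
    """Precipitation occurs once the ion product exceeds the solubility product."""
    return ion_product > ksp

# Mixing solutions so that [Ba2+] = [SO4 2-] = 1e-3 M:
q = 1e-3 * 1e-3                      # ion product Q
print(will_precipitate(q, 1.1e-10))  # True: BaSO4 precipitates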
Solid-state reactions
Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and finely dividing the reactant to increase the contacting surface area.
Reactions at the solid/gas interface
Reactions can take place at the solid–gas interface, on surfaces at very low pressure such as ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis.
Photochemical reactions
In photochemical reactions, atoms and molecules absorb energy (photons) of the illumination light and convert it into an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.
Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth atmosphere and constitute atmospheric chemistry.
Catalysis
In catalysis, the reaction does not proceed directly, but through a reaction with a third substance known as catalyst. Although the catalyst takes part in the reaction, forming weak bonds with reactants or intermediates, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction that is kinetically inhibited by a high activation energy can take place because the catalyst provides a pathway that circumvents this activation energy.
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: they increase the electrophilicity of carbonyls, allowing reactions with nucleophiles that would not otherwise proceed. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.
Reactions in organic chemistry
In organic chemistry, in addition to oxidation, reduction or acid–base reactions, a number of other reactions can take place that involve covalent bonds between carbon atoms or between carbon and heteroatoms (such as oxygen, nitrogen or the halogens). Many specific reactions in organic chemistry are name reactions designated after their discoverers.
One of the most industrially important reactions is the cracking of heavy hydrocarbons at oil refineries to create smaller, simpler molecules. This process is used to manufacture gasoline. Specific types of organic reactions may be grouped by their reaction mechanisms (particularly substitution, addition and elimination) or by the types of products they produce (for example, methylation, polymerisation and halogenation).
Substitution
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and enter nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.
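The kinetic orders behind these names correspond to the standard rate laws (k is the rate constant; square brackets denote concentrations):

rate(SN1) = k [substrate]                  (unimolecular)
rate(SN2) = k [substrate][nucleophile]     (bimolecular)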
The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then is the leaving group cleaved. These two mechanisms differ in the stereochemistry of the products. SN1 proceeds through a planar carbocation that can be attacked from either face, so it is not stereospecific and typically gives racemization at the reacting center. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.
Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations or sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.
In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.
X· + R–H → X–H + R·
R· + X2 → R–X + X·
Reactions during the chain reaction of radical substitution
Addition and elimination
The addition and its counterpart, the elimination, are reactions that change the number of substituents on the carbon atom, and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to the nucleophilic substitution, there are several possible reaction mechanisms that are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, the formation of the double bond, takes place with the elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is, the proton is split off first. This mechanism requires the participation of a base. Because of the similar conditions, both the E1 and E1cb eliminations always compete with the SN1 substitution.
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.
The counterpart of elimination is an addition where double or triple bonds are converted into single bonds. Similar to substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromine). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with the Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."
If the addition of a functional group takes place at the less substituted carbon atom of the double bond, then the electrophilic substitution with acids is not possible. In this case, one has to use the hydroboration–oxidation reaction, wherein the first step, the boron atom acts as electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role in the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with elimination so that after the reaction the carbonyl group is present again. It is, therefore, called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta-unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.
Some additions which cannot be effected with nucleophiles and electrophiles can be achieved with free radicals. As with free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of free-radical polymerization.
Other organic reaction mechanisms
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner–Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon–carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement.
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in a different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.
Biochemical reactions
Biochemical reactions are mainly controlled by complex proteins called enzymes, which are usually specialized to catalyze only a single, specific reaction. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is the anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. Important energy sources are glucose and oxygen, which can be produced by plants via photosynthesis or assimilated from food and air, respectively. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions. Decomposition of organic material by fungi, bacteria and other micro-organisms is also within the scope of biochemistry.
Applications
Chemical reactions are central to chemical engineering, where they are used for the synthesis of new compounds from natural raw materials such as petroleum, mineral ores, and oxygen in air. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the number of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.
Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.
Monitoring
Mechanisms of monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real-time analysis are the measurement of pH and analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy where utilization of femtosecond lasers allows short-lived transition states to be monitored at a time scaled down to a few femtoseconds.
See also
Chemical equation
Chemical reaction
Substrate
Reagent
Catalyst
Product
Chemical reaction model
Chemist
Chemistry
Combustion
Limiting reagent
List of organic reactions
Mass balance
Microscopic reversibility
Organic reaction
Reaction progress kinetic analysis
Reversible reaction
References
Bibliography
Chemistry
Change | Chemical reaction | ["Chemistry"] | 8,540 | ["nan"] |
6,295 | https://en.wikipedia.org/wiki/Chaos%20theory | Chaos theory (or chaology) is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. These were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
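A common back-of-the-envelope estimate makes this quantitative. If an initial measurement error δ0 grows as δ(t) ≈ δ0 e^{λt}, where λ is the leading Lyapunov exponent (so the Lyapunov time is 1/λ), then the forecast stays within a tolerance Δ only up to roughly

t ≈ (1/λ) ln(Δ/δ0),

so each tenfold improvement in the initial measurement buys only a fixed increment (ln 10)/λ of additional forecast time.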
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C or fall below −130 °C on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ0, the two trajectories end up diverging at a rate given by
|δZ(t)| ≈ e^{λt} |δZ0|,
where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
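For a concrete computation, the exponent of the logistic map x → 4x(1 − x) can be estimated by averaging ln|f′(x)| along an orbit. A minimal Python sketch; the parameter choices are illustrative:

import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit average of ln|f'(x)|."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # f'(x) = r*(1 - 2x)
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic())  # about 0.693 = ln 2; positive, indicating chaos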
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
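A few lines of Python make the point: nearby orbits of the doubling map separate exponentially, yet the dynamics are trivially simple (the starting values are chosen only for illustration):

def orbit(x0, steps=10):
    """Iterate x -> 2x on the real line (no folding back into an interval)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(2.0 * xs[-1])
    return xs

a, b = orbit(1.0), orbit(1.001)
print([round(y - x, 3) for x, y in zip(a, b)])  # the gap doubles every step,
# yet every nonzero orbit simply runs off to infinity: no mixing, no chaos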
Topological transitivity
A map f : X → X is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4x(1 − x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
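The period-2 orbit is easy to verify numerically; a quick check, nothing more:

import math

f = lambda x: 4.0 * x * (1.0 - x)  # the logistic map at r = 4

p = (5.0 - math.sqrt(5.0)) / 8.0   # ~0.3454915
q = f(p)                           # ~0.9045085
print(abs(f(q) - p) < 1e-12)       # True: applying f twice returns to p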
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
Coexisting attractors
In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
dx/dt = σ(y − x)
dy/dt = x(ρ − z) − y
dz/dt = xy − βz
where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
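A minimal Python sketch integrating these equations with a fixed-step Runge–Kutta scheme at the classic parameter values σ = 10, ρ = 28, β = 8/3; the step size and initial points are illustrative:

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with classical 4th-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two trajectories that start 1e-9 apart end up macroscopically different:
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-9)
for _ in range(5000):  # 25 time units at dt = 0.005
    a, b = lorenz_step(a), lorenz_step(b)
print(max(abs(u - v) for u, v in zip(a, b)))  # order-one separation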
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps:
ψ_{n+1}(r, t) = ∫ K(r − r′, t) f[ψ_n(r′, t)] dr′,
where the kernel K(r − r′, t) is a propagator derived as a Green function of a relevant physical system, and f[ψ_n(r, t)] might be a logistic-map-like or a complex map. Examples of complex maps are the quadratic map of Julia set iterations, ψ → ψ² + c, or the Ikeda map, ψ_{n+1} = A + Bψ_n e^{i(|ψ_n|² + C)}. When wave propagation problems at a distance L with wavelength λ are considered, the kernel K may have the form of a Green function for the Schrödinger equation:
K(r − r′, L) = (ik/(2πL)) e^{ikL} exp(ik|r − r′|²/(2L)), with wavenumber k = 2π/λ.
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form
J(d³x/dt³, d²x/dt², dx/dt, x) = 0
are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order ordinary non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:
d³x/dt³ + A·d²x/dt² + dx/dt − |x| + 1 = 0
Here, A is an adjustable parameter. This equation has a chaotic solution for A=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:
In the above circuit, all resistors are of equal value, except RA = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x and the output of 2 corresponds to the second derivative.
Similar circuits only require one diode or no diodes at all.
See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
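As a sketch, the jerk equation above can also be integrated directly by rewriting it as three first-order equations; the step size, initial condition and run length here are illustrative, a demonstration rather than a careful numerical study:

def jerk_rhs(s, A=0.6):
    """x''' = -A*x'' - x' + |x| - 1 (A = 3/5), written as a first-order system."""
    x, v, a = s
    return (v, a, -A * a - v + abs(x) - 1.0)

def rk4(s, f, dt=0.01):
    """One classical 4th-order Runge-Kutta step for a tuple-valued state."""
    nudge = lambda s, k, h: tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(s)
    k2 = f(nudge(s, k1, dt / 2))
    k3 = f(nudge(s, k2, dt / 2))
    k4 = f(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * r + w)
                 for si, p, q, r, w in zip(s, k1, k2, k3, k4))

s = (0.0, 0.0, 0.0)
for _ in range(20000):
    s = rk4(s, jerk_rhs)
print(s)  # keeps wandering on a bounded attractor instead of settling down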
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.
History
James Clerk Maxwell first emphasized the "butterfly effect", and is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measure imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959 Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos, which succeeded in explaining a concrete experiment. And Boris Chirikov himself is considered as a pioneer in classical and quantum chaos.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborator Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
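Lorenz's printout experiment is easy to re-enact with any chaotic map. In the sketch below the logistic map stands in for the weather model (an illustrative substitution, not his actual equations), with his full-precision value and its 3-digit printout as the two starting points:

x_full, x_printout = 0.506127, 0.506  # full precision vs. the rounded printout
for step in range(1, 31):
    x_full = 4.0 * x_full * (1.0 - x_full)
    x_printout = 4.0 * x_printout * (1.0 - x_printout)
    if step % 10 == 0:
        print(step, abs(x_full - x_printout))
# the initial difference of ~1e-4 grows to order one within a couple dozen steps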
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
Lorenz's pioneering contributions to chaotic modeling
Throughout his career, Professor Edward Lorenz authored a total of 61 research papers, out of which 58 were solely authored by him. Commencing with the 1960 conference in Japan, Lorenz embarked on a journey of developing diverse models aimed at uncovering the SDIC and chaotic features. A recent review of Lorenz's model progression spanning from 1960 to 2008 revealed his adeptness at employing varied physical systems to illustrate chaotic phenomena. These systems encompassed Quasi-geostrophic systems, the Conservative Vorticity Equation, the Rayleigh-Bénard Convection Equations, and the Shallow Water Equations. Moreover, Lorenz can be credited with the early application of the logistic map to explore chaotic solutions, a milestone he achieved ahead of his colleagues (e.g. Lorenz 1964).
In 1972, Lorenz coined the term "butterfly effect" as a metaphor to discuss whether a small perturbation could eventually create a tornado with a three-dimensional, organized, and coherent structure. While connected to the original butterfly effect based on sensitive dependence on initial conditions, its metaphorical variant carries distinct nuances. To commemorate this milestone, a reprint book containing invited papers that deepen our understanding of both butterfly effects was officially published to celebrate the 50th anniversary of the metaphorical butterfly effect.
A popular but inaccurate analogy for chaos
The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:
For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.
Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. In a recent study, the characteristic of the aforementioned verse was denoted as "finite-time sensitive dependence".
Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.
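A toy sketch of the idea this paragraph describes: a logistic-map keystream whose control parameter and initial condition act as the key. This is purely illustrative; as noted above, many schemes of exactly this kind have been broken, and nothing here should be used for real encryption:

def keystream(x0, r, n):
    """Bytes from iterating the logistic map; (x0, r) play the role of the key."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, x0=0.3141592, r=3.99):
    """XOR the data with the chaotic keystream; applying it twice decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(x0, r, len(data))))

message = b"attack at dawn"
ciphertext = xor_cipher(message)
print(xor_cipher(ciphertext) == message)  # True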
Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model. Hence both constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to the reality, for example Perry & Wall 1984. Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even for a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population.
Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos can be detected in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates as well as in embedding shocks due to external events such as COVID-19.
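The following sketch shows the basic ingredient of recurrence-based methods: a binary recurrence matrix marking which pairs of states in a series lie close together, from which the recurrence rate, one of the simplest RQA measures, is read off. It uses a scalar series with no embedding, and the threshold is an illustrative assumption; published analyses such as Orlando et al.'s use more elaborate indices.

```python
# Build a recurrence matrix for a scalar time series and report the
# recurrence rate (fraction of recurrent points). A logistic-map series
# stands in for an economic time series here.
import numpy as np

def recurrence_matrix(series: np.ndarray, eps: float) -> np.ndarray:
    """R[i, j] = 1 when states i and j are closer than eps."""
    d = np.abs(series[:, None] - series[None, :])  # pairwise distances
    return (d < eps).astype(int)

x, xs = 0.4, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)    # chaotic logistic map, r = 4
    xs.append(x)

R = recurrence_matrix(np.array(xs), eps=0.1)
print(f"recurrence rate: {R.mean():.3f}")
```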
Finite predictability in weather and climate
Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.
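A doubling time of the kind cited above can be illustrated numerically by integrating two nearby trajectories of the Lorenz 1963 model and timing how long their separation takes to double. The step size, initial state, and perturbation below are illustrative choices, and the result is a local estimate, not a reproduction of Charney's analysis.

```python
# Twin-trajectory experiment on the Lorenz 1963 model with a forward
# Euler integrator; the loop runs until the tiny initial separation
# between the two trajectories has doubled.
import numpy as np

def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    ds = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return s + dt * ds                      # one Euler step

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])          # perturbed twin trajectory
d0, t = 1e-8, 0.0
while np.linalg.norm(a - b) < 2 * d0:       # run until separation doubles
    a, b, t = lorenz_step(a), lorenz_step(b), t + 0.001
print(f"separation doubled after t = {t:.3f} model time units")
```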
AI-extended modeling framework
In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefited greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
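A minimal sketch of the chaos-enhanced PSO idea mentioned above: the uniform random draws in the velocity update are replaced with a logistic-map sequence, whose ergodic wandering helps particles escape premature convergence. The objective function and all parameter values are illustrative assumptions, not a specific published variant.

```python
# Particle swarm optimization where the stochastic coefficients come
# from a chaotic logistic map instead of a uniform random generator.
import numpy as np

def chaotic_pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    chaos = rng.uniform(0.1, 0.9, (n_particles, dim))  # chaotic seeds
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)      # logistic map, r = 4
        v = w * v + c1 * chaos * (pbest - x) + c2 * (1 - chaos) * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pval                     # update personal bests
        pbest[better], pval[better] = x[better], vals[better]
        gbest = pbest[pval.argmin()].copy()      # update global best
    return gbest, pval.min()

best, val = chaotic_pso(lambda p: np.sum(p**2))  # minimize a sphere function
print(best, val)
```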
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
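For a system whose equations are known, the largest Lyapunov exponent that such studies attempt to estimate is straightforward to compute; the difficulty lies in extracting it from short, noisy data like interbeat intervals. A minimal sketch for the logistic map, where the exponent is the orbit average of log |f'(x)| and its true value (ln 2) is known, shows what a successful computation confirms.

```python
# Largest Lyapunov exponent of the logistic map f(x) = 4x(1-x),
# computed as the orbit average of log |f'(x)|.
import math

x, total, n = 0.4, 0.0, 10_000
for _ in range(n):
    total += math.log(abs(4.0 - 8.0 * x))  # log |f'(x)|, f'(x) = 4 - 8x
    x = 4.0 * x * (1.0 - x)                # advance the orbit
print(total / n)  # approaches ln 2 ~ 0.693; positive => sensitive dependence
```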
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it occurs. Combining chaos theory principles with a few other methods has led to more accurate short-term prediction models, such as those based on the Biham–Middleton–Levine (BML) cellular-automaton traffic model.
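A minimal sketch of the Biham–Middleton–Levine model referenced above: eastbound and southbound cars alternate turns on a periodic grid, each moving only when the cell ahead is empty. The grid size and car density are illustrative choices; depending on density, the system self-organizes into free flow, jams, or intermediate chaotic phases.

```python
# BML traffic cellular automaton on a torus: 1 = eastbound, 2 = southbound,
# 0 = empty. On even steps eastbound cars move, on odd steps southbound.
import numpy as np

def bml_step(grid: np.ndarray, step: int) -> np.ndarray:
    kind = 1 if step % 2 == 0 else 2            # whose turn it is
    axis = 1 if kind == 1 else 0                # east = columns, south = rows
    ahead = np.roll(grid, -1, axis=axis)        # cell each car wants to enter
    movers = (grid == kind) & (ahead == 0)      # cars with an empty cell ahead
    new = grid.copy()
    new[movers] = 0                             # vacate old cells
    new[np.roll(movers, 1, axis=axis)] = kind   # occupy the cells ahead
    return new

rng = np.random.default_rng(1)
g = rng.choice([0, 1, 2], size=(32, 32), p=[0.7, 0.15, 0.15])
for t in range(200):
    g = bml_step(g, t)
print("cars remaining:", np.count_nonzero(g))   # conserved by construction
```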
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimensional chaotic dynamics.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt-A-Whirl
Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos as topological supersymmetry breaking
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Shadowing lemma
Synchronization of chaos
Unintended consequence
People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
References
Further reading
Articles
Online version (Note: the volume and page cited for the online text differ from those cited here. The citation here is from a photocopy, which is consistent with other citations found online that don't provide article views. The online content is identical to the hardcopy text. Citation variations are related to country of publication.)
Textbooks
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, .
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St. Martin's Press 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St. Martin's Press 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Press, 1993.
M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis: Chaos and Neurodynamics Approach, Lambert, 2012.
External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in New Scientist featuring similarities between evolution and non-linear systems, including the fractal nature of life and chaos.
Jos Leys, Étienne Ghys and Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller
Complex systems theory
Computational fields of study | Chaos theory | [
"Technology"
] | 9,685 | [
"Computational fields of study",
"Computing and society"
] |
6,312 | https://en.wikipedia.org/wiki/Cell%20wall | A cell wall is a structural layer that surrounds some cell types, found immediately outside the cell membrane. It can be tough, flexible, and sometimes rigid. Primarily, it provides the cell with structural support, shape, protection, and functions as a selective barrier. Another vital role of the cell wall is to help the cell withstand osmotic pressure and mechanical stress. While absent in many eukaryotes, including animals, cell walls are prevalent in other organisms such as fungi, algae and plants, and are commonly found in most prokaryotes, with the exception of mollicute bacteria.
The composition of cell walls varies across taxonomic groups, species, cell type, and the cell cycle. In land plants, the primary cell wall comprises polysaccharides like cellulose, hemicelluloses, and pectin. Often, other polymers such as lignin, suberin or cutin are anchored to or embedded in plant cell walls. Algae exhibit cell walls composed of glycoproteins and polysaccharides, such as carrageenan and agar, distinct from those in land plants. Bacterial cell walls contain peptidoglycan, while archaeal cell walls vary in composition, potentially consisting of glycoprotein S-layers, pseudopeptidoglycan, or polysaccharides. Fungi possess cell walls constructed from chitin, a polymer of N-acetylglucosamine. Diatoms have a unique cell wall composed of biogenic silica.
History
A plant cell wall was first observed and named (simply as a "wall") by Robert Hooke in 1665. However, "the dead excretion product of the living protoplast" was forgotten for almost three centuries, being the subject of scientific interest mainly as a resource for industrial processing or in relation to animal or human health.
In 1804, Karl Rudolphi and J.H.F. Link proved that cells had independent cell walls. Before, it had been thought that cells shared walls and that fluid passed between them this way.
The mode of formation of the cell wall was controversial in the 19th century. Hugo von Mohl (1853, 1858) advocated the idea that the cell wall grows by apposition. Carl Nägeli (1858, 1862, 1863) believed that the growth of the wall in thickness and in area was due to a process termed intussusception. Each theory was improved in the following decades: the apposition (or lamination) theory by Eduard Strasburger (1882, 1889), and the intussusception theory by Julius Wiesner (1886).
In 1930, Ernst Münch coined the term apoplast in order to separate the "living" symplast from the "dead" plant region, the latter of which included the cell wall.
By the 1980s, some authors suggested replacing the term "cell wall", particularly as it was used for plants, with the more precise term "extracellular matrix", as used for animal cells, but others preferred the older term.
Properties
Cell walls serve similar purposes in those organisms that possess them. They may give cells rigidity and strength, offering protection against mechanical stress. The chemical composition and mechanical properties of the cell wall are linked with plant cell growth and morphogenesis. In multicellular organisms, they permit the organism to build and hold a definite shape. Cell walls also limit the entry of large molecules that may be toxic to the cell. They further permit the creation of stable osmotic environments by preventing osmotic lysis and helping to retain water. Their composition, properties, and form may change during the cell cycle and depend on growth conditions.
Rigidity of cell walls
In most cells, the cell wall is flexible, meaning that it will bend rather than holding a fixed shape, but has considerable tensile strength. The apparent rigidity of primary plant tissues is enabled by cell walls, but is not due to the walls' stiffness. Hydraulic turgor pressure creates this rigidity, along with the wall structure. The flexibility of the cell walls is seen when plants wilt, so that the stems and leaves begin to droop, or in seaweeds that bend in water currents. As John Howland explains:
The apparent rigidity of the cell wall thus results from inflation of the cell contained within. This inflation is a result of the passive uptake of water.
In plants, a secondary cell wall is a thicker additional layer of cellulose which increases wall rigidity. Additional layers may be formed by lignin in xylem cell walls, or suberin in cork cell walls. These compounds are rigid and waterproof, making the secondary wall stiff. Both wood and bark cells of trees have secondary walls. Other parts of plants such as the leaf stalk may acquire similar reinforcement to resist the strain of physical forces.
Permeability
The primary cell wall of most plant cells is freely permeable to small molecules including small proteins, with size exclusion estimated to be 30-60 kDa. pH is an important factor governing the transport of molecules through cell walls.
Evolution
Cell walls evolved independently in many groups.
The photosynthetic eukaryotes (the plants and algae) are one group with cellulose cell walls, where the cell wall is closely related to the evolution of multicellularity, terrestrialization and vascularization. The CesA cellulose synthase evolved in Cyanobacteria and has been part of Archaeplastida since endosymbiosis; secondary endosymbiosis events transferred it (with the arabinogalactan proteins) further into brown algae and oomycetes. Plants later evolved various genes from CesA, including the Csl (cellulose synthase-like) family of proteins and additional Ces proteins. Combined with the various glycosyltransferases (GT), they enable more complex chemical structures to be built.
Fungi use a chitin-glucan-protein cell wall. They share the 1,3-β-glucan synthesis pathway with plants, using homologous GT48 family 1,3-β-glucan synthases to perform the task, suggesting that such an enzyme is very ancient within the eukaryotes. Their glycoproteins are rich in mannose. The cell wall might have evolved to deter viral infections. Proteins embedded in cell walls are variable, contained in tandem repeats subject to homologous recombination. An alternative scenario is that fungi started with a chitin-based cell wall and later acquired the GT48 enzymes for the 1,3-β-glucans via horizontal gene transfer. The pathway leading to 1,6-β-glucan synthesis is not sufficiently known in either case.
Plant cell walls
The walls of plant cells must have sufficient tensile strength to withstand internal osmotic pressures of several times atmospheric pressure that result from the difference in solute concentration between the cell interior and external solutions. Plant cell walls vary from 0.1 to several μm in thickness.
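The van 't Hoff relation Π = cRT gives a back-of-the-envelope check of this claim; the 0.3 mol/L concentration difference below is an illustrative assumption, not a measured plant value.

```python
# Osmotic pressure from a solute concentration difference via van 't Hoff.
R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # temperature, K
c = 0.3 * 1000     # 0.3 mol/L expressed as mol/m^3

pressure = c * R * T                    # osmotic pressure in pascals
print(f"{pressure / 101_325:.1f} atm")  # about 7.3 times atmospheric pressure
```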
Layers
Up to three strata or layers may be found in plant cell walls:
The primary cell wall, generally a thin, flexible and extensible layer formed while the cell is growing.
The secondary cell wall, a thick layer formed inside the primary cell wall after the cell is fully grown. It is not found in all cell types. Some cells, such as the conducting cells in xylem, possess a secondary wall containing lignin, which strengthens and waterproofs the wall.
The middle lamella, a layer rich in pectins. This outermost layer forms the interface between adjacent plant cells and glues them together.
Composition
In the primary (growing) plant cell wall, the major carbohydrates are cellulose, hemicellulose and pectin. The cellulose microfibrils are linked via hemicellulosic tethers to form the cellulose-hemicellulose network, which is embedded in the pectin matrix. The most common hemicellulose in the primary cell wall is xyloglucan. In grass cell walls, xyloglucan and pectin are reduced in abundance and partially replaced by glucuronoarabinoxylan, another type of hemicellulose. Primary cell walls characteristically extend (grow) by a mechanism called acid growth, mediated by expansins, extracellular proteins activated by acidic conditions that modify the hydrogen bonds between pectin and cellulose. This functions to increase cell wall extensibility. The outer part of the primary cell wall of the plant epidermis is usually impregnated with cutin and wax, forming a permeability barrier known as the plant cuticle.
Secondary cell walls contain a wide range of additional compounds that modify their mechanical properties and permeability. The major polymers that make up wood (largely secondary cell walls) include:
cellulose, 35-50%
xylan, 20-35%, a type of hemicellulose
lignin, 10-25%, a complex phenolic polymer that penetrates the spaces in the cell wall between cellulose, hemicellulose and pectin components, driving out water and strengthening the wall.
Additionally, structural proteins (1-5%) are found in most plant cell walls; they are classified as hydroxyproline-rich glycoproteins (HRGP), arabinogalactan proteins (AGP), glycine-rich proteins (GRPs), and proline-rich proteins (PRPs). Each class of glycoprotein is defined by a characteristic, highly repetitive protein sequence. Most are glycosylated, contain hydroxyproline (Hyp) and become cross-linked in the cell wall. These proteins are often concentrated in specialized cells and in cell corners. Cell walls of the epidermis may contain cutin. The Casparian strip in the endodermis roots and cork cells of plant bark contain suberin. Both cutin and suberin are polyesters that function as permeability barriers to the movement of water. The relative composition of carbohydrates, secondary compounds and proteins varies between plants and between the cell type and age. Plant cell walls also contain numerous enzymes, such as hydrolases, esterases, peroxidases, and transglycosylases, that cut, trim and cross-link wall polymers.
Secondary walls - especially in grasses - may also contain microscopic silica crystals, which may strengthen the wall and protect it from herbivores.
Cell walls in some plant tissues also function as storage deposits for carbohydrates that can be broken down and resorbed to supply the metabolic and growth needs of the plant. For example, endosperm cell walls in the seeds of cereal grasses, nasturtium, and other species are rich in glucans and other polysaccharides that are readily digested by enzymes during seed germination to form simple sugars that nourish the growing embryo.
Formation
The middle lamella is laid down first, formed from the cell plate during cytokinesis, and the primary cell wall is then deposited inside the middle lamella. The actual structure of the cell wall is not clearly defined and several models exist - the covalently linked cross model, the tether model, the diffuse layer model and the stratified layer model. However, the primary cell wall can be defined as composed of cellulose microfibrils aligned at all angles. Cellulose microfibrils are produced at the plasma membrane by the cellulose synthase complex, which is proposed to be made of a hexameric rosette that contains three cellulose synthase catalytic subunits for each of the six units. Microfibrils are held together by hydrogen bonds to provide a high tensile strength. The cells are held together and share the gelatinous membrane (the middle lamella), which contains magnesium and calcium pectates (salts of pectic acid). Cells interact through plasmodesmata, which are inter-connecting channels of cytoplasm that connect to the protoplasts of adjacent cells across the cell wall.
In some plants and cell types, after a maximum size or point in development has been reached, a secondary wall is constructed between the plasma membrane and primary wall. Unlike the primary wall, the cellulose microfibrils are aligned parallel in layers, the orientation changing slightly with each additional layer so that the structure becomes helicoidal. Cells with secondary cell walls can be rigid, as in the gritty sclereid cells in pear and quince fruit. Cell to cell communication is possible through pits in the secondary cell wall that allow plasmodesmata to connect cells through the secondary cell walls.
Fungal cell walls
There are several groups of organisms that have been called "fungi". Some of these groups (Oomycete and Myxogastria) have been transferred out of the Kingdom Fungi, in part because of fundamental biochemical differences in the composition of the cell wall. Most true fungi have a cell wall consisting largely of chitin and other polysaccharides. True fungi do not have cellulose in their cell walls.
In fungi, the cell wall is the outer-most layer, external to the plasma membrane. The fungal cell wall is a matrix of three main components:
chitin: polymers consisting mainly of unbranched chains of β-(1,4)-linked N-acetylglucosamine in the Ascomycota and Basidiomycota, or its deacetylated derivative, poly-β-(1,4)-linked glucosamine (chitosan), in the Zygomycota. Both chitin and chitosan are synthesized and extruded at the plasma membrane.
glucans: glucose polymers that function to cross-link chitin or chitosan polymers. β-glucans are glucose molecules linked via β-(1,3)- or β-(1,6)- bonds and provide rigidity to the cell wall while α-glucans are defined by α-(1,3)- and/or α-(1,4) bonds and function as part of the matrix.
proteins: enzymes necessary for cell wall synthesis and lysis in addition to structural proteins are all present in the cell wall. Most of the structural proteins found in the cell wall are glycosylated and contain mannose, thus these proteins are called mannoproteins or mannans.
Other eukaryotic cell walls
Algae
Like plants, algae have cell walls. Algal cell walls contain either polysaccharides (such as cellulose (a glucan)) or a variety of glycoproteins (Volvocales) or both. The inclusion of additional polysaccharides in algal cell walls is used as a feature for algal taxonomy.
Mannans: They form microfibrils in the cell walls of a number of marine green algae including those from the genera Codium, Dasycladus, and Acetabularia, as well as in the walls of some red algae, like Porphyra and Bangia.
Xylans: They replace cellulose as the main fibrillar component in the cell walls of some green and red algae.
Alginic acid: It is a common polysaccharide in the cell walls of brown algae.
Sulfonated polysaccharides: They occur in the cell walls of most algae; those common in red algae include agarose, carrageenan, porphyran, furcelleran and funoran.
Other compounds that may accumulate in algal cell walls include sporopollenin and calcium ions.
The group of algae known as the diatoms synthesize their cell walls (also known as frustules or valves) from silicic acid. Significantly, relative to the organic cell walls produced by other groups, silica frustules require less energy to synthesize (approximately 8%), potentially a major saving on the overall cell energy budget and possibly an explanation for higher growth rates in diatoms.
In brown algae, phlorotannins may be a constituent of the cell walls.
Water molds
The group Oomycetes, also known as water molds, are saprotrophic plant pathogens like fungi. Until recently they were widely believed to be fungi, but structural and molecular evidence has led to their reclassification as heterokonts, related to autotrophic brown algae and diatoms. Unlike fungi, oomycetes typically possess cell walls of cellulose and glucans rather than chitin, although some genera (such as Achlya and Saprolegnia) do have chitin in their walls. The fraction of cellulose in the walls is no more than 4 to 20%, far less than the fraction of glucans. Oomycete cell walls also contain the amino acid hydroxyproline, which is not found in fungal cell walls.
Slime molds
The dictyostelids are another group formerly classified among the fungi. They are slime molds that feed as unicellular amoebae, but aggregate into a reproductive stalk and sporangium under certain conditions. Cells of the reproductive stalk, as well as the spores formed at the apex, possess a cellulose wall. The spore wall has three layers, the middle one composed primarily of cellulose, while the innermost is sensitive to cellulase and pronase.
Prokaryotic cell walls
Bacterial cell walls
Around the outside of the cell membrane is the bacterial cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by unusual peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, although L-form bacteria can be produced in the laboratory that lack a cell wall. The antibiotic penicillin is able to kill bacteria by preventing the cross-linking of peptidoglycan, which causes the cell wall to weaken and lyse. The lysozyme enzyme can also damage bacterial cell walls.
There are broadly speaking two different types of cell wall in bacteria, called gram-positive and gram-negative. The names originate from the reaction of cells to the Gram stain, a test long-employed for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids.
Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the gram-negative cell wall and only the Bacillota and Actinomycetota (previously known as the low G+C and high G+C gram-positive bacteria, respectively) have the alternative gram-positive arrangement.
These differences in structure produce differences in antibiotic susceptibility. For instance, the glycopeptide antibiotics (e.g. vancomycin, teicoplanin, telavancin) work only against gram-positive pathogens such as Staphylococcus aureus, because these large molecules cannot penetrate the outer membrane of gram-negative bacteria, while that same outer membrane of gram-negative pathogens such as Haemophilus influenzae or Pseudomonas aeruginosa also limits the access of many beta-lactam antibiotics (e.g. penicillin, cephalosporin) to their peptidoglycan target.
Archaeal cell walls
Although not truly unique, the cell walls of Archaea are unusual. Whereas peptidoglycan is a standard component of all bacterial cell walls, all archaeal cell walls lack peptidoglycan, though some methanogens have a cell wall made of a similar polymer called pseudopeptidoglycan. There are four types of cell wall currently known among the Archaea.
One type of archaeal cell wall is that composed of pseudopeptidoglycan (also called pseudomurein). This type of wall is found in some methanogens, such as Methanobacterium and Methanothermus. While the overall structure of archaeal pseudopeptidoglycan superficially resembles that of bacterial peptidoglycan, there are a number of significant chemical differences. Like the peptidoglycan found in bacterial cell walls, pseudopeptidoglycan consists of polymer chains of glycan cross-linked by short peptide connections. However, unlike peptidoglycan, the sugar N-acetylmuramic acid is replaced by N-acetyltalosaminuronic acid, and the two sugars are bonded with a β,1-3 glycosidic linkage instead of β,1-4. Additionally, the cross-linking peptides are L-amino acids rather than D-amino acids as they are in bacteria.
A second type of archaeal cell wall is found in Methanosarcina and Halococcus. This type of cell wall is composed entirely of a thick layer of polysaccharides, which may be sulfated in the case of Halococcus. Structure in this type of wall is complex and not fully investigated.
A third type of wall among the Archaea consists of glycoprotein, and occurs in the hyperthermophiles, Halobacterium, and some methanogens. In Halobacterium, the proteins in the wall have a high content of acidic amino acids, giving the wall an overall negative charge. The result is an unstable structure that is stabilized by the presence of large quantities of positive sodium ions that neutralize the charge. Consequently, Halobacterium thrives only under conditions with high salinity.
In other Archaea, such as Methanomicrobium and Desulfurococcus, the wall may be composed only of surface-layer proteins, known as an S-layer. S-layers are common in bacteria, where they serve as either the sole cell-wall component or an outer layer in conjunction with polysaccharides. Most Archaea are Gram-negative, though at least one Gram-positive member is known.
Other cell coverings
Many protists and bacteria produce other cell surface structures apart from cell walls, external (extracellular matrix) or internal. Many algae have a sheath or envelope of mucilage outside the cell made of exopolysaccharides. Diatoms build a frustule from silica extracted from the surrounding water; radiolarians, foraminiferans, testate amoebae and silicoflagellates also produce a skeleton from minerals, called a test in some groups. Many green algae, such as Halimeda and the Dasycladales, and some red algae, the Corallinales, encase their cells in a secreted skeleton of calcium carbonate. In each case, the wall is rigid and essentially inorganic, a non-living component of the cell. Some golden algae, ciliates and choanoflagellates produce a shell-like protective outer covering called a lorica. Some dinoflagellates have a theca of cellulose plates, and coccolithophorids have coccoliths.
An extracellular matrix (ECM) is also present in metazoans. Its composition varies between cells, but collagens are the most abundant protein in the ECM.
See also
Extracellular matrix
Bacterial cell structure
Plant cell
References
External links
Cell wall ultrastructure
The Cell Wall
Plant physiology
Organelles | Cell wall | [
"Biology"
] | 4,970 | [
"Plant physiology",
"Plants"
] |
6,313 | https://en.wikipedia.org/wiki/Classical%20element | The classical elements typically refer to earth, water, air, fire, and (later) aether which were proposed to explain the nature and complexity of all matter in terms of simpler substances. Ancient cultures in Greece, Angola, Tibet, India, and Mali had similar lists which sometimes referred, in local languages, to "air" as "wind", and to "aether" as "space".
These different cultures and even individual philosophers had widely varying explanations concerning their attributes and how they related to observable phenomena as well as cosmology. Sometimes these theories overlapped with mythology and were personified in deities. Some of these interpretations included atomism (the idea of very small, indivisible portions of matter), but other interpretations considered the elements to be divisible into infinitely small pieces without changing their nature.
While the classification of the material world in ancient India, Hellenistic Egypt, and ancient Greece into air, earth, fire, and water was more philosophical, during the Middle Ages scientists used practical, experimental observation to classify materials. In Europe, the ancient Greek concept, devised by Empedocles, evolved into the systematic classifications of Aristotle and Hippocrates. This evolved slightly into the medieval system, and eventually became the object of experimental verification in the 17th century, at the start of the Scientific Revolution.
Modern science does not support the classical elements to classify types of substances. Atomic theory classifies atoms into more than a hundred chemical elements such as oxygen, iron, and mercury, which may form chemical compounds and mixtures. The modern categories roughly corresponding to the classical elements are the states of matter produced under different temperatures and pressures. Solid, liquid, gas, and plasma share many attributes with the corresponding classical elements of earth, water, air, and fire, but these states describe the similar behavior of different types of atoms at similar energy levels, not the characteristic behavior of certain atoms or substances.
Hellenistic philosophy
The ancient Greek concept of four basic elements, these being earth, water, air, and fire, dates from pre-Socratic times and persisted throughout the Middle Ages and into the Early modern period, deeply influencing European thought and culture.
Pre-Socratic elements
Water, air, or fire?
The classical elements were first proposed independently by several early Pre-Socratic philosophers. Greek philosophers had debated which substance was the arche ("first principle"), or primordial element from which everything else was made. Thales believed that water was this principle. Anaximander argued that the primordial substance was not any of the known substances, but could be transformed into them, and they into each other. Anaximenes favored air, and Heraclitus championed fire.
Fire, earth, air, and water
The Greek philosopher Empedocles was the first to propose the four classical elements as a set: fire, earth, air, and water. He called them the four "roots". Empedocles also proved (at least to his own satisfaction) that air was a separate substance by observing that a bucket inverted in water did not become filled with water, a pocket of air remaining trapped inside.
Fire, earth, air, and water have become the most popular set of classical elements in modern interpretations. One such version was provided by Robert Boyle in The Sceptical Chymist, which was published in 1661 in the form of a dialogue between five characters. Themistius, the Aristotelian of the party, says:
Humorism (Hippocrates)
According to Galen, these elements were used by Hippocrates in describing the human body with an association with the four humours: yellow bile (fire), black bile (earth), blood (air), and phlegm (water). Medical care was primarily about helping the patient stay in or return to their own personal natural balanced state.
Plato
Plato (428/423 – 348/347 BC) seems to have been the first to use the term "element" in reference to air, fire, earth, and water. The ancient Greek word for element (from a verb meaning "to line up") meant "smallest division (of a sun-dial), a syllable"; as the composing unit of an alphabet it could denote a letter, the smallest unit from which a word is formed.
Aristotle
In On the Heavens (350 BC), Aristotle defines "element" in general:
In his On Generation and Corruption, Aristotle related each of the four elements to two of the four sensible qualities:
Fire is both hot and dry.
Air is both hot and wet (for air is like vapor, ).
Water is both cold and wet.
Earth is both cold and dry.
A classic diagram has one square inscribed in the other, with the corners of one being the classical elements, and the corners of the other being the properties. The opposite corner is the opposite of these properties, "hot – cold" and "dry – wet".
Aether
Aristotle added a fifth element, aether, as the quintessence, reasoning that whereas fire, earth, air, and water were earthly and corruptible, since no changes had been perceived in the heavenly regions, the stars cannot be made out of any of the four elements but must be made of a different, unchangeable, heavenly substance. It had previously been believed by pre-Socratics such as Empedocles and Anaxagoras that aether, the name applied to the material of heavenly bodies, was a form of fire. Aristotle himself did not use the term aether for the fifth element, and strongly criticised the pre-Socratics for associating the term with fire. He preferred a number of other terms indicating eternal movement, thus emphasising the evidence for his discovery of a new element. These five elements have been associated since Plato's Timaeus with the five platonic solids.
Neo-Platonism
The Neoplatonic philosopher Proclus rejected Aristotle's theory relating the elements to the sensible qualities hot, cold, wet, and dry. He maintained that each of the elements has three properties. Fire is sharp (ὀξυτητα), subtle (λεπτομερειαν), and mobile (εὐκινησιαν) while its opposite, earth, is blunt (αμβλυτητα), dense (παχυμερειαν), and immobile (ακινησιαν); they are joined by the intermediate elements, air and water, in the following fashion:
Hermeticism
A text written in Egypt in Hellenistic or Roman times called the Kore Kosmou ("Virgin of the World") ascribed to Hermes Trismegistus (associated with the Egyptian god Thoth), names the four elements fire, water, air, and earth. As described in this book:
Ancient Indian philosophy
Hinduism
The system of five elements is found in the Vedas, especially Ayurveda; the pancha mahabhuta, or "five great elements", of Hinduism are:
bhūmi or pṛthvī (earth),
āpas or jala (water),
agní or tejas (fire),
vāyu, vyāna, or vāta (air or wind)
ākāśa, vyom, or śūnya (space or zero) or (aether or void).
They further suggest that all of creation, including the human body, is made of these five essential elements and that upon death, the human body dissolves into these five elements of nature, thereby balancing the cycle of nature.
The five elements are associated with the five senses, and act as the gross medium for the experience of sensations. The basest element, earth, created using all the other elements, can be perceived by all five senses — (i) hearing, (ii) touch, (iii) sight, (iv) taste, and (v) smell. The next higher element, water, has no odor but can be heard, felt, seen and tasted. Next comes fire, which can be heard, felt and seen. Air can be heard and felt. "Akasha" (aether) is beyond the senses of smell, taste, sight, and touch; it being accessible to the sense of hearing alone.
Buddhism
Buddhism has had a variety of thought about the five elements and their existence and relevance, some of which continue to this day.
In the Pali literature, the mahabhuta ("great elements") or catudhatu ("four elements") are earth, water, fire and air. In early Buddhism, the four elements are a basis for understanding suffering and for liberating oneself from suffering. The earliest Buddhist texts explain that the four primary material elements are solidity, fluidity, temperature, and mobility, characterized as earth, water, fire, and air, respectively.
The Buddha's teaching regarding the four elements is to be understood as the base of all observation of real sensations rather than as a philosophy. The four properties are cohesion (water), solidity or inertia (earth), expansion or vibration (air) and heat or energy content (fire). He promulgated a categorization of mind and matter as composed of eight types of "kalapas" of which the four elements are primary and a secondary group of four are colour, smell, taste, and nutriment which are derivative from the four primaries.
Thanissaro Bhikkhu (1997) renders an extract of Shakyamuni Buddha's from Pali into English thus:
Tibetan Buddhist medical literature speaks of the five elements or "elemental properties": earth, water, fire, wind, and space. The concept was extensively used in traditional Tibetan medicine. Tibetan Buddhist theology, tantra traditions, and "astrological texts" also spoke of them making up the "environment, [human] bodies," and at the smallest or "subtlest" level of existence, parts of thought and the mind. Also at the subtlest level of existence, the elements exist as "pure natures represented by the five female buddhas", Ākāśadhātviśvarī, Buddhalocanā, Mamakī, Pāṇḍarāvasinī, and Samayatārā, and these pure natures "manifest as the physical properties of earth (solidity), water (fluidity), fire (heat and light), wind (movement and energy), and" the expanse of space. These natures exist as all "qualities" that are in the physical world and take forms in it.
Ancient African philosophy
Angola
In traditional Bakongo religion, the five elements are incorporated into the Kongo cosmogram. This sacred symbol also depicts the physical world (Nseke), the spiritual world of the ancestors (Mpémba), the Kalûnga line that runs between the two worlds, the circular void that originally formed the two worlds (mbûngi), and the path of the sun. Each element correlates to a period in the life cycle, which the Bakongo people also equate to the four cardinal directions. According to their cosmology, all living things go through this cycle.
Aether represents mbûngi, the circular void that begot the universe.
Air (South) represents musoni, the period of conception that takes place during spring.
Fire (East) represent kala, the period of birth that takes place during summer.
Earth (North) represents tukula, the period of maturity that takes place during fall.
Water (West) represents luvemba, the period of death that takes place during winter
Mali
In traditional Bambara spirituality, the Supreme God created four additional essences of himself during creation. Together, these five essences of the deity correlate with the five classical elements.
Koni is the thought and void (aether).
Bemba (also called Pemba) is the god of the sky and air.
Nyale (also called Koroni Koundyé) is the goddess of fire.
Faro is the androgynous god of water.
Ndomadyiri is the god and master of the earth.
Post-classical history
Alchemy
The elemental system used in medieval alchemy was developed primarily by the anonymous authors of the Arabic works attributed to Pseudo Apollonius of Tyana. This system consisted of the four classical elements of air, earth, fire, and water, in addition to a new theory called the sulphur-mercury theory of metals, which was based on two elements: sulphur, characterizing the principle of combustibility, "the stone which burns"; and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy.
The three metallic principles—sulphur to flammability or combustion, mercury to volatility and stability, and salt to solidity—became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).
Japan
Japanese traditions use a set of elements called the godai (literally "five great"). These five are earth, water, fire, wind/air, and void. These came from Indian Vastu shastra philosophy and Buddhist beliefs; in addition, the classical Chinese elements (wu xing) are also prominent in Japanese culture, especially among the influential Neo-Confucianists during the medieval Edo period.
Earth represented rocks and stability.
Water represented fluidity and adaptability.
Fire represented life and energy.
Wind represented movement and expansion.
Void or Sky/Heaven represented spirit and creative energy.
Medieval Aristotelian philosophy
The Islamic philosophers al-Kindi, Avicenna and Fakhr al-Din al-Razi followed Aristotle in connecting the four elements with the four natures heat and cold (the active force), and dryness and moisture (the recipients).
Medicine Wheel
The medicine wheel symbol is a modern invention attributed to Native American peoples dating to approximately 1972, with the following descriptions and associations being a later addition. The associations with the classical elements are not grounded in traditional Indigenous teachings and the symbol has not been adopted by all Indigenous American nations.
Earth (South) represents the youth cycle, summer, the Indigenous race, and cedar medicine.
Fire (East) represents the birth cycle, spring, the Asian race, and tobacco medicine.
Wind/Air (North) represents the elder cycle, winter, the European race, and sweetgrass medicine.
Water (West) represents the adulthood cycle, autumn, the African race, and sage medicine.
Modern history
Chemical element
The Aristotelian tradition and medieval alchemy eventually gave rise to modern chemistry, scientific theories and new taxonomies. By the time of Antoine Lavoisier, for example, a list of elements would no longer refer to classical elements. Some modern scientists see a parallel between the classical elements and the four states of matter: solid, liquid, gas and weakly ionized plasma.
Modern science recognizes classes of elementary particles which have no substructure (or rather, particles that are not made of other particles) and composite particles having substructure (particles made of other particles).
Western astrology
Western astrology uses the four classical elements in connection with astrological charts and horoscopes. The twelve signs of the zodiac are divided into the four elements: Fire signs are Aries, Leo and Sagittarius, Earth signs are Taurus, Virgo and Capricorn, Air signs are Gemini, Libra and Aquarius, and Water signs are Cancer, Scorpio, and Pisces.
Criticism
The Dutch historian of science Eduard Jan Dijksterhuis writes that the theory of the classical elements "was bound to exercise a really harmful influence. As is now clear, Aristotle, by adopting this theory as the basis of his interpretation of nature and by never losing faith in it, took a course which promised few opportunities and many dangers for science." Bertrand Russell says that Aristotle's thinking became imbued with almost biblical authority in later centuries. So much so that "Ever since the beginning of the seventeenth century, almost every serious intellectual advance has had to begin with an attack on some Aristotelian doctrine".
See also
Alchemy and chemistry in the medieval Islamic world – Early Islamic alchemy
Notes
References
Bibliography
External links
Section on 4 elements in Buddhism
Natural philosophy
History of astrology
Technical factors of astrology
Concepts in Chinese philosophy
Theories in ancient Greek philosophy
Indian philosophy
Hindu cosmology
Buddhist cosmology
Taoist cosmology
Esoteric cosmology | Classical element | [
"Astronomy"
] | 3,543 | [
"History of astrology",
"History of astronomy"
] |
6,314 | https://en.wikipedia.org/wiki/Fire%20%28classical%20element%29 | Fire is one of the four classical elements along with earth, water and air in ancient Greek philosophy and science. Fire is considered to be both hot and dry and, according to Plato, is associated with the tetrahedron.
Greek and Roman tradition
Fire is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with the qualities of energy, assertiveness, and passion. In one Greek myth, Prometheus stole fire from the gods to protect the otherwise helpless humans, but was punished for this charity.
Fire was one of many archai proposed by the pre-Socratics, most of whom sought to reduce the cosmos, or its creation, to a single substance. Heraclitus considered fire to be the most fundamental of all elements. He believed fire gave rise to the other three elements: "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." He had a reputation for obscure philosophical principles and for speaking in riddles. He described how fire gave rise to the other elements as the "upward-downward path", a "hidden harmony" or series of transformations he called the "turnings of fire", first into sea, and half that sea into earth, and half that earth into rarefied air. This is a concept that anticipates both the four classical elements of Empedocles and Aristotle's transmutation of the four elements into one another.
This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out.
Heraclitus regarded the soul as being a mixture of fire and water, with fire being the more noble part and water the ignoble aspect. He believed the goal of the soul is to be rid of water and become pure fire: the dry soul is the best and it is worldly pleasures that make the soul "moist". He was known as the "weeping philosopher" and died of hydropsy, a swelling due to abnormal accumulation of fluid beneath the skin.
However, Empedocles of Akragas is best known for having selected all four elements as his archai, and by the time of Plato the four Empedoclean elements were well established. In the Timaeus, Plato's major cosmological dialogue, the Platonic solid he associated with fire was the tetrahedron, which is formed from four triangles and contains the least volume with the greatest surface area. This also makes fire the element with the smallest number of sides, and Plato regarded it as appropriate for the heat of fire, which he felt is sharp and stabbing (like one of the points of a tetrahedron).
Plato's student Aristotle did not maintain his former teacher's geometric view of the elements, but rather preferred a somewhat more naturalistic explanation for the elements based on their traditional qualities. Fire, the hot and dry element, like the other elements, was an abstract principle and not identical with the normal solids, liquids and combustion phenomena we experience:
What we commonly call fire. It is not really fire, for fire is an excess of heat and a sort of ebullition; but in reality, of what we call air, the part surrounding the earth is moist and warm, because it contains both vapour and a dry exhalation from the earth.
According to Aristotle, the four elements rise or fall toward their natural place in concentric layers surrounding the center of the Earth and form the terrestrial or sublunary spheres.
In ancient Greek medicine, each of the four humours became associated with an element. Yellow bile was the humor identified with fire, since both were hot and dry. Other things associated with fire and yellow bile in ancient and medieval medicine included the season of summer, since it increased the qualities of heat and aridity; the choleric temperament (of a person dominated by the yellow bile humour); the masculine; and the eastern point of the compass.
In alchemy the chemical element of sulfur was often associated with fire, and its alchemical symbol was an upward-pointing triangle. In alchemic tradition, metals are incubated by fire in the womb of the Earth and alchemists only accelerate their development.
Indian tradition
Agni is a Hindu and Vedic deity. The word agni is Sanskrit for fire (noun), cognate with Latin ignis (the root of English ignite), Russian огонь (fire), pronounced agon. Agni has three forms: fire, lightning and the sun.
Agni is one of the most important of the Vedic gods. He is the god of fire and the accepter of sacrifices. The sacrifices made to Agni go to the deities because Agni is a messenger from and to the other gods. He is ever-young, because the fire is re-lit every day, yet he is also immortal. In Indian tradition, fire is also linked with Surya (the Sun) and Mangala (Mars), and with the south-east direction.
Teukāya ekendriya is a name used in Jain tradition which refers to Jīvas said to be reincarnated as fire.
Ceremonial magic
Fire and the other Greek classical elements were incorporated into the Golden Dawn system. Philosophus (4=7) is the elemental grade attributed to fire; this grade is also attributed to the Qabalistic Sephirah Netzach and the planet Venus. The elemental weapon of fire is the Wand. Each of the elements has several associated spiritual beings. The archangel of fire is Michael, the angel is Aral, the ruler is Seraph, the king is Djin, and the fire elementals (following Paracelsus) are called salamanders. Fire is considered to be active; it is represented by the symbol for Leo and it is referred to the lower right point of the pentacle in the Supreme Invoking Ritual of the Pentacle. Many of these associations have since spread throughout the occult community.
Tarot
Fire in tarot symbolizes conversion or passion. Many references to fire in tarot are related to the usage of fire in the practice of alchemy, in which the application of fire is a prime method of conversion, and everything that touches fire is changed, often beyond recognition. The symbol of fire was a cue pointing towards transformation, the chemical variant being the symbol delta, which is also the classical symbol for fire. Conversion symbolized can be good, for example, refining raw crudities to gold, as seen in The Devil. Conversion can also be bad, as in The Tower, symbolizing a downfall due to anger. Fire is associated with the suit of rods/wands, and as such, represents passion from inspiration. As an element, fire has mixed symbolism because it represents energy, which can be helpful when controlled, but volatile if left unchecked.
Modern witchcraft
Fire is one of the five elements that appear in most Wiccan traditions influenced by the Golden Dawn system of magic, and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.
Freemasonry
In freemasonry, fire is present, for example, during the ceremony of winter solstice, a symbol also of renaissance and energy. Freemasonry takes the ancient symbolic meaning of fire and recognizes its double nature: creation, light, on the one hand, and destruction and purification, on the other.
See also
Fire
Fire god
Fire worship
Pyrokinesis
Pyromancy
Pyromania
References
Further reading
Frazer, Sir James George, Myths of the Origin of Fire, London: Macmillan, 1930.
External links
Different versions of the classical elements
Overview of the 5 elements
Section on 4 elements in Buddhism
A virtual exhibition about the history of fire
Classical elements
Esoteric cosmology
Fire in culture
Technical factors of astrology
History of astrology
Concepts in ancient Greek metaphysics | Fire (classical element) | [
"Astronomy"
] | 1,644 | [
"History of astrology",
"History of astronomy"
] |
6,315 | https://en.wikipedia.org/wiki/Air%20%28classical%20element%29 | Air or Wind is one of the four classical elements along with water, earth and fire in ancient Greek philosophy and in Western alchemy.
Greek and Roman tradition
According to Plato, it is associated with the octahedron; air is considered to be both hot and wet. The ancient Greeks used two words for air: aer meant the dim lower atmosphere, and aether meant the bright upper atmosphere above the clouds. Plato, for instance, writes that "So it is with air: there is the brightest variety which we call aether, the muddiest which we call mist and darkness, and other kinds for which we have no name...." Among the early Greek Pre-Socratic philosophers, Anaximenes (mid-6th century BCE) named air as the arche. A similar belief was attributed by some ancient sources to Diogenes Apolloniates (late 5th century BCE), who also linked air with intelligence and soul (psyche), but other sources claim that his arche was a substance between air and fire. Aristophanes parodied such teachings in his play The Clouds by putting a prayer to air in the mouth of Socrates.
Air was one of many archai proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495-c. 435 BCE) selected four archai for his four roots: air, fire, water, and earth. Ancient and modern opinions differ as to whether he identified air by the divine name Hera, Aidoneus or even Zeus. Empedocles’ roots became the four classical elements of Greek philosophy. Plato (427–347 BCE) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with air is the octahedron which is formed from eight equilateral triangles. This places air between fire and water which Plato regarded as appropriate because it is intermediate in its mobility, sharpness, and ability to penetrate. He also said of air that its minuscule components are so smooth that one can barely feel them.
Plato's student Aristotle (384–322 BCE) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the universe to form the sublunary sphere. According to Aristotle, air is both hot and wet and occupies a place between fire and water among the elemental spheres. Aristotle definitively separated air from aether. For him, aether was an unchanging, almost divine substance that was found only in the heavens, where it formed celestial spheres.
Humorism and temperaments
In ancient Greek medicine, each of the four humours became associated with an element. Blood was the humor identified with air, since both were hot and wet. Other things associated with air and blood in ancient and medieval medicine included the season of spring, since it increased the qualities of heat and moisture; the sanguine temperament (of a person dominated by the blood humour); hermaphrodite (combining the masculine quality of heat with the feminine quality of moisture); and the northern point of the compass.
Alchemy
The alchemical symbol for air is an upward-pointing triangle, bisected by a horizontal line.
Modern reception
The Hermetic Order of the Golden Dawn, founded in 1888, incorporates air and the other Greek classical elements into its teachings. The elemental weapon of air is the dagger, which must be painted yellow with magical names and sigils written upon it in violet. Each of the elements has several associated spiritual beings. The archangel of air is Raphael, the angel is Chassan, the ruler is Ariel, the king is Paralda, and the air elementals (following Paracelsus) are called sylphs. Air is considered to be active; it is referred to the upper left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
In the Golden Dawn and many other magical systems, each element is associated with one of the cardinal points and is placed under the care of guardian Watchtowers. The Watchtowers derive from the Enochian system of magic founded by Dee. In the Golden Dawn, they are represented by the Enochian elemental tablets. Air is associated with the east, which is guarded by the First Watchtower.
Air is one of the five elements that appear in most Wiccan and Pagan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism.
Parallels in non-Western traditions
Air is not one of the traditional five Chinese classical elements. Nevertheless, the ancient Chinese concept of Qi or chi is believed to be close to that of air. Qi is believed to be part of every living thing that exists, as a kind of "life force" or "spiritual energy". It is frequently translated as "energy flow", or literally as "air" or "breath". (For example, tiānqì, literally "sky breath", is the Chinese word for "weather".) The concept of qi is often reified; however, no scientific evidence supports its existence.
The element air also appears as a concept in the Buddhist philosophy which has an ancient history in China.
Some Western modern occultists equate the Chinese classical element of metal with air, others with wood due to the elemental association of wind and wood in the bagua.
Enlil was the god of air in ancient Sumer. Shu was the ancient Egyptian deity of air and the husband of Tefnut, goddess of moisture. He became an emblem of strength by virtue of his role in separating Nut from Geb. Shu played a primary role in the Coffin Texts, which were spells intended to help the deceased reach the realm of the afterlife safely. On the way to the sky, the spirit had to travel through the air as one spell indicates: "I have gone up in Shu, I have climbed on the sunbeams."
According to Jain beliefs, the element air is inhabited by one-sensed beings or spirits called vāyukāya ekendriya, sometimes said to inhabit various kinds of winds such as whirlwinds, cyclones, monsoons, west winds and trade winds. Prior to reincarnating into another lifeform, spirits can remain as vāyukāya ekendriya from a single instant up to three thousand years, depending on the karma of the spirits.
See also
Atmosphere of Earth
Sky deity
Wind deity
Notes
References
Barnes, Jonathan. Early Greek Philosophy. London: Penguin, 1987.
Brier, Bob. Ancient Egyptian Magic. New York: Quill, 1980.
Guthrie, W. K. C. A History of Greek Philosophy. 6 volumes. Cambridge: Cambridge University Press, 1962–81.
Hutton, Ronald. Triumph of the Moon: A History of Modern Pagan Witchcraft. Oxford: Oxford University Press, 1999, 2001.
Kraig, Donald Michael. Modern Magick: Eleven Lessons in the High Magickal Arts. St. Paul: Llewellyn, 1994.
Lloyd, G. E. R. Aristotle: The Growth and Structure of His Thought. Cambridge: Cambridge University Press, 1968.
Plato. Timaeus and Critias. Translated by Desmond Lee. Revised edition. London: Penguin, 1977.
Regardie, Israel. The Golden Dawn. 6th edition. St. Paul: Llewellyn, 1990.
Schiebinger, Londa. The Mind Has No Sex? Women in the Origins of Modern Science. Cambridge: Harvard University Press, 1989.
Valiente, Doreen. Witchcraft for Tomorrow. Custer, Wash.: Phoenix Publishing, 1978.
Valiente, Doreen. The Rebirth of Witchcraft. Custer, Wash.: Phoenix Publishing, 1989.
Vlastos, Gregory. Plato’s Universe. Seattle: University of Washington Press, 1975.
Further reading
Cunningham, Scott. Earth, Air, Fire and Water: More Techniques of Natural Magic.
Starhawk. The Spiral Dance: A Rebirth of the Ancient Religion of the Great Goddess. 3rd edition. 1999.
External links
Atmosphere of Earth
Classical elements
Esoteric cosmology
History of astrology
Technical factors of astrology
Gases
Concepts in ancient Greek metaphysics | Air (classical element) | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,709 | [
"Matter",
"History of astronomy",
"Phases of matter",
"Statistical mechanics",
"Gases",
"History of astrology"
] |
6,316 | https://en.wikipedia.org/wiki/Water%20%28classical%20element%29 | Water is one of the classical elements in ancient Greek philosophy along with air, earth and fire, in the Asian Indian system Panchamahabhuta, and in the Chinese cosmological and physiological system Wu Xing. In contemporary esoteric traditions, it is commonly associated with the qualities of emotion and intuition.
Greek and Roman tradition
Water was one of many archai proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BC) selected four archai for his four roots: air, fire, water and earth. Empedocles' roots became the four classical elements of Greek philosophy. Plato (427–347 BC) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with water is the icosahedron which is formed from twenty equilateral triangles. This makes water the element with the greatest number of sides, which Plato regarded as appropriate because water flows out of one's hand when picked up, as if it is made of tiny little balls.
Plato's student Aristotle (384–322 BC) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the Universe to form the sublunary sphere. According to Aristotle, water is both cold and wet and occupies a place between air and earth among the elemental spheres.
In ancient Greek medicine, each of the four humours became associated with an element. Phlegm was the humor identified with water, since both were cold and wet. Other things associated with water and phlegm in ancient and medieval medicine included the season of Winter, since it increased the qualities of cold and moisture, the phlegmatic temperament, the feminine and the western point of the compass.
In alchemy, the chemical element of mercury was often associated with water and its alchemical symbol was a downward-pointing triangle.
Indian tradition
Ap is the Vedic Sanskrit term for water, which in Classical Sanskrit occurs only in the plural (sometimes re-analysed as a thematic singular), whence the Hindi form. The term derives from the Proto-Indo-European root *hₓap- ("water").
In Hindu philosophy, the term refers to water as an element, one of the Panchamahabhuta, or "five great elements". In Hinduism, it is also the name of the deva, a personification of water (one of the Vasus in most later Puranic lists). The element water is also associated with Chandra or the moon and Shukra, who represent feelings, intuition and imagination.
According to Jain tradition, water itself is inhabited by spiritual Jīvas called apakāya ekendriya.
Ceremonial magic
Water and the other Greek classical elements were incorporated into the Golden Dawn system. The elemental weapon of water is the cup. Each of the elements has several associated spiritual beings. The archangel of water is Gabriel, the angel is Taliahad, the ruler is Tharsis, the king is Nichsa and the water elementals are called Ondines. It is referred to the upper right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
Modern witchcraft
Water is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.
See also
Water
Sea and river deity
Notes
External links
Different versions of the classical elements
Classical elements
Water in culture
Esoteric cosmology
History of astrology
Technical factors of astrology
Concepts in ancient Greek metaphysics | Water (classical element) | [
"Astronomy"
] | 787 | [
"History of astrology",
"History of astronomy"
] |
6,317 | https://en.wikipedia.org/wiki/Earth%20%28classical%20element%29 | Earth is one of the classical elements, in some systems being one of the four along with air, fire, and water.
European tradition
Earth is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with qualities of heaviness, matter and the terrestrial world. Due to the hero cults, and chthonic underworld deities, the element of earth is also associated with the sensual aspects of both life and death in later occultism.
Empedocles of Acragas proposed four archai by which to understand the cosmos: fire, air, water, and earth. Plato (427–347 BCE) believed the elements were geometric forms (the platonic solids) and he assigned the cube to the element of earth in his dialogue Timaeus. Aristotle (384–322 BCE) believed earth was the heaviest element, and his theory of natural place suggested that any earth-laden substance would fall quickly, straight down, toward the center of the cosmos.
In Classical Greek and Roman myth, various goddesses represented the Earth, seasons, crops and fertility, including Demeter and Persephone; Ceres; the Horae (goddesses of the seasons), and Proserpina; and Hades (Pluto), who ruled the souls of the dead in the Underworld.
In ancient Greek medicine, each of the four humours became associated with an element. Black bile was the humor identified with earth, since both were cold and dry. Other things associated with earth and black bile in ancient and medieval medicine included the season of fall, since it increased the qualities of cold and aridity; the melancholic temperament (of a person dominated by the black bile humour); the feminine; and the southern point of the compass.
In alchemy, earth was believed to be primarily dry and secondarily cold (as per Aristotle). Beyond those classical attributes, the chemical substance salt was associated with earth; its alchemical symbol was a downward-pointing triangle bisected by a horizontal line.
Indian tradition
Prithvi (Sanskrit: , also ) is the Hindu earth and mother goddess. According to one such tradition, she is the personification of the Earth itself; according to another, its actual mother, being Prithvi Tattwa, the essence of the element earth.
As Prithvi Mata, or "Mother Earth", she contrasts with Dyaus Pita, "father sky". In the Rigveda, earth and sky are frequently addressed as a duality, often indicated by the idea of two complementary "half-shells." In addition, the element Earth is associated with Budha or Mercury who represents communication, business, mathematics and other practical matters.
Jainism mentions one-sensed beings or spirits believed to inhabit the element earth sometimes classified as pṛthvīkāya ekendriya.
Ceremonial magic
Earth and the other Greek classical elements were incorporated into the Golden Dawn system. Zelator is the elemental grade attributed to earth; this grade is also attributed to the Sephirah of Malkuth. The elemental weapon of earth is the Pentacle. Each of the elements has several associated spiritual beings. The archangel of earth is Uriel, the angel is Phorlakh, the ruler is Kerub, the king is Ghob, and the earth elementals (following Paracelsus) are called gnomes. Earth is considered to be passive; it is represented by the symbol for Taurus, and it is referred to the lower left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.
It is sometimes represented by its Tattva or by a downward pointing triangle with a horizontal line through it.
Modern witchcraft
Earth is one of the five elements that appear in most Wiccan and Pagan traditions. Wicca in particular was influenced by the Golden Dawn system of magic, and Aleister Crowley's mysticism which was in turn inspired by the Golden Dawn.
Other traditions
Earth is represented in the Aztec religion by a house; to the Hindus, a lotus; to the Scythians, a plough; to the Greeks, a wheel; and in Christian iconography, bulls and birds.
See also
Earth
Gaia (mythology)
Mother goddess
Mother nature
Pherecydes of Syros
Notes
External links
Different versions of the classical elements
Earth in religion
Classical elements
Esoteric cosmology
History of astrology
Technical factors of astrology
Concepts in ancient Greek metaphysics | Earth (classical element) | [
"Astronomy"
] | 922 | [
"History of astrology",
"History of astronomy"
] |
6,329 | https://en.wikipedia.org/wiki/Chromatography | In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
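As a rough illustration of this relationship, the sketch below (a minimal, hypothetical example; the partition coefficients, phase volumes, and hold-up time are all assumed values) shows how a small difference in partition coefficient K translates into a measurable difference in retention time via the standard relations k = K·(Vs/Vm) and t_R = t0·(1 + k).

```python
def retention_time(K: float, Vs: float, Vm: float, t0: float) -> float:
    """Retention time (same units as t0) for partition coefficient K,
    stationary-phase volume Vs, mobile-phase volume Vm, hold-up time t0."""
    k = K * Vs / Vm            # retention (capacity) factor
    return t0 * (1.0 + k)      # elution relation: t_R = t0 * (1 + k)

# Two compounds with only subtly different K still separate in time:
for name, K in [("compound A", 98.0), ("compound B", 102.0)]:
    print(name, round(retention_time(K, Vs=0.01, Vm=1.0, t0=1.5), 2), "min")
```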
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification; because it is run at larger scale, it is associated with higher costs. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
Etymology and pronunciation
Chromatography is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments.
History
The method was developed by botanist Mikhail Tsvet in 1901–1905 at the universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively), they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
Terms
Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture.
Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample.
Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time, and plotted on the y-axis is a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated (see the first sketch following this list).
Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation.
Chromatography – a physical method of separation in which the components to be separated are distributed between two phases, one stationary (stationary phase), the other (the mobile phase) moving in a definite direction.
Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; it is synonymous with mobile phase.
Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column.
Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of any separation taking place.
Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column.
Eluotropic series – a list of solvents ranked according to their eluting power.
Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated.
Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. See also: Kovats' retention index (a computational sketch follows this list).
Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample whereas everything out of interest separated from the sample before or in the course of the analysis is referred to as waste.
Solute – the sample components in partition chromatography.
Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography.
Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography.
Detector – the instrument used for qualitative and quantitative detection of analytes after separation.
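As a rough illustration of the chromatogram entry above: the detector trace can be modeled as a signal-versus-retention-time curve built from one approximately Gaussian peak per well-resolved analyte. This is a minimal sketch; all peak positions, widths, and areas are assumed values, not data from a real separation.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2000)              # retention-time axis, minutes

def gaussian_peak(t, t_r, sigma, area):
    """Gaussian peak centered at retention time t_r with width sigma."""
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-((t - t_r) ** 2) / (2.0 * sigma ** 2))

signal = (gaussian_peak(t, 3.2, 0.08, 1.0)    # analyte 1
          + gaussian_peak(t, 4.1, 0.10, 0.6)  # analyte 2
          + gaussian_peak(t, 7.5, 0.15, 0.3)) # analyte 3

print("signal maximum:", round(float(signal.max()), 3))
```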
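And for the retention-time entry's pointer to the Kovats retention index: under isothermal GC conditions, the index interpolates the logarithm of an analyte's adjusted retention time between those of the bracketing n-alkanes. A minimal sketch, with illustrative retention times:

```python
import math

def kovats_index(t_x: float, t_n: float, t_n1: float, n: int) -> float:
    """Isothermal Kovats index of analyte x from adjusted retention times:
    t_n and t_n1 belong to the n-alkanes (n and n+1 carbons) bracketing t_x."""
    return 100.0 * (n + (math.log(t_x) - math.log(t_n))
                        / (math.log(t_n1) - math.log(t_n)))

# Illustrative values: analyte eluting between octane (C8) and nonane (C9).
print(round(kovats_index(t_x=5.0, t_n=4.0, t_n1=6.5, n=8)))  # ~846
```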
Chromatography is based on the concept of the partition coefficient: any solute partitions between two immiscible solvents. Making one solvent immobile (by adsorption on a solid support matrix) while the other moves gives rise to the most common applications of chromatography. If the matrix support, or stationary phase, is polar (e.g., cellulose or silica), the technique is normal (forward) phase chromatography; otherwise it is known as reversed phase, where a non-polar stationary phase (e.g., a non-polar C-18 derivative) is used.
Techniques by chromatographic bed shape
Column chromatography
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
Planar chromatography
Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
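Since the retention factor is just a ratio of two measured distances, it is easy to compute; the sketch below assumes hypothetical plate measurements.

```python
def rf(distance_spot_cm: float, distance_front_cm: float) -> float:
    """Retention factor: distance the spot traveled divided by the distance
    the solvent front traveled, both measured from the origin (0 <= Rf <= 1)."""
    return distance_spot_cm / distance_front_cm

# Illustrative plate: solvent front ran 8.0 cm, analyte spot moved 5.2 cm.
print(round(rf(5.2, 8.0), 2))  # 0.65
```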
Paper chromatography
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
Thin-layer chromatography (TLC)
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
Possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (the separation itself was a separate step).
Displacement chromatography
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
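To make "baseline separation" concrete: resolution between two peaks is commonly quantified as Rs = 2(t₂ − t₁)/(w₁ + w₂), with Rs of roughly 1.5 or more conventionally taken as baseline-resolved. A minimal sketch with assumed peak data:

```python
def resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """Resolution between two peaks from retention times and baseline peak
    widths: Rs = 2 * (t2 - t1) / (w1 + w2)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Rs of about 1.5 or more is conventionally regarded as baseline separation.
print(resolution(t1=4.0, t2=4.6, w1=0.35, w2=0.40))  # 1.6
```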
Techniques by physical state of mobile phase
Gas chromatography
Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Further, capillary columns can be split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns. PLOT columns are unique in that the stationary phase is adsorbed to the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns combine the two types: support particles are adhered to the column walls, and those particles have a liquid phase chemically bonded onto them. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
Liquid chromatography
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of a single continuous block of organic or inorganic material. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
Supercritical fluid chromatography
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
Techniques by separation mechanism
Affinity chromatography
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
Ion exchange chromatography
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compounds to be retained. There are two types of ion exchange chromatography: cation-exchange and anion-exchange. In cation-exchange chromatography the stationary phase has negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
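A rule-of-thumb sketch of choosing between the two modes for a protein, assuming the usual relation between buffer pH and the protein's isoelectric point (pI); the pI and pH values below are illustrative:

```python
def choose_exchanger(protein_pI: float, buffer_pH: float) -> str:
    """Above its pI a protein carries net negative charge and binds a
    positively charged anion exchanger; below its pI it carries net
    positive charge and binds a negatively charged cation exchanger."""
    if buffer_pH > protein_pI:
        return "anion exchanger (positively charged stationary phase)"
    if buffer_pH < protein_pI:
        return "cation exchanger (negatively charged stationary phase)"
    return "near pI: little net charge, poor binding on either exchanger"

print(choose_exchanger(protein_pI=5.5, buffer_pH=7.4))  # anion exchanger
```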
Size-exclusion chromatography
Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and are therefore temporarily trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
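In practice, SEC columns are calibrated so that elution volume maps to molar mass. One common formulation uses the partition coefficient K_av = (Ve − V0)/(Vt − V0), with log(molar mass) roughly linear in K_av over the column's working range; the sketch below assumes made-up column volumes and calibration constants.

```python
def k_av(Ve: float, V0: float, Vt: float) -> float:
    """SEC partition coefficient: where elution volume Ve falls between the
    void volume V0 (fully excluded) and total column volume Vt (fully included)."""
    return (Ve - V0) / (Vt - V0)

def estimate_molar_mass(Ve: float, V0=8.0, Vt=24.0, slope=-4.0, intercept=6.5) -> float:
    """log10(M) assumed linear in K_av; slope and intercept would come from
    running mass standards on the same column (values here are made up)."""
    return 10.0 ** (intercept + slope * k_av(Ve, V0, Vt))

print(f"{estimate_molar_mass(14.0):.3g} Da")  # mid-range elution -> ~1e5 Da
```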
Expanded bed adsorption chromatographic separation
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top. Better distribution of the feedstock liquor added into the expanded bed ensures that the fluid passing through the bed displays a state of piston flow, which increases the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
Special techniques
Reversed-phase chromatography
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.
Hydrophobic interaction chromatography
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups, such as methyl, ethyl, propyl, butyl, octyl, or phenyl groups. At high salt concentrations, non-polar side chains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a buffer which is highly polar, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentrations, increasing concentrations of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts as defined by the Hofmeister series providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, hydrophobic interaction chromatography is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study altered temperature so as to affect the binding affinity of BSA to the matrix. It was concluded that cycling the temperature from 40 to 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were only to be used a few times. Using temperature to drive elution allows labs to avoid the cost of large amounts of salt.
If high salt concentrations and temperature fluctuations are both to be avoided, a more hydrophobic competitor can be used to displace the sample and elute it. This so-called salt-independent method of HIC demonstrated direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations can precipitate proteins.
Hydrodynamic chromatography
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
HDC shares the same order of elution as size-exclusion chromatography (SEC), but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel used both methods for polysaccharide characterization and concluded that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution, in significantly less time, than SEC with off-line MALS. This is largely due to SEC being a more destructive technique because the pores in the column degrade the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is the low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for HDC-on-a-chip system was proposed by Chmela, et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh, et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device.
Two-dimensional chromatography
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical (chemical classification) properties. Since the mechanism of retention on this new solid support is different from the first dimensional separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation on the second dimension occurs faster than the first dimension. An example of a two-dimensional TLC separation is one in which the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
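One way to see the appeal of the comprehensive approach is through peak capacity: if the two retention mechanisms are fully orthogonal (an idealization), the capacities of the two dimensions multiply rather than add. A minimal sketch with assumed capacities:

```python
def peak_capacity_2d(n1: int, n2: int) -> int:
    """Idealized peak capacity of a comprehensive 2D separation: the product
    of the one-dimensional capacities, valid only if the two retention
    mechanisms are fully orthogonal (so this is an upper bound in practice)."""
    return n1 * n2

# Two modest columns combined comprehensively vs. used one after the other:
print("comprehensive 2D:", peak_capacity_2d(100, 25))  # 2500
print("serial coupling: ", 100 + 25)                   # 125
```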
Simulated moving-bed chromatography
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, simulated moving bed technique was proposed. In the simulated moving bed technique instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed, and analyte and waste takeoff, at appropriate locations on any column, and allows switching, at regular intervals, the sample entry in one direction and the solvent entry in the opposite direction, while changing the analyte and waste takeoff positions appropriately as well.
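A toy sketch of the switching idea, assuming a hypothetical eight-column ring and port layout: at each switching interval, every inlet and outlet port advances by one column, which mimics a countercurrently moving bed without physically moving the packing.

```python
columns = ["C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8"]
ports = {"eluent in": 0, "extract out": 2, "feed in": 4, "raffinate out": 6}

def switch_ports(ports: dict, n_columns: int) -> dict:
    """Advance every port one column position (one switching period),
    simulating movement of the bed relative to the ports."""
    return {name: (pos + 1) % n_columns for name, pos in ports.items()}

for period in range(3):
    print(f"period {period}:", {name: columns[pos] for name, pos in ports.items()})
    ports = switch_ports(ports, len(columns))
```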
Pyrolysis gas chromatography
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
Fast protein liquid chromatography
Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
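A minimal sketch of the gradient idea described above: the fraction of buffer B drawn from the second reservoir ramps linearly between a start and end time. The times and compositions below are assumed values.

```python
def fraction_b(t: float, t_start: float, t_end: float,
               b_start: float = 0.0, b_end: float = 1.0) -> float:
    """Fraction of buffer B in the mobile phase at time t (minutes) for a
    linear gradient from b_start to b_end between t_start and t_end."""
    if t <= t_start:
        return b_start
    if t >= t_end:
        return b_end
    return b_start + (b_end - b_start) * (t - t_start) / (t_end - t_start)

# 0 % B until 5 min, then a linear ramp to 100 % B at 25 min:
for t in (0, 5, 15, 25, 30):
    print(t, "min ->", round(100 * fraction_b(t, 5.0, 25.0)), "% B")
```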
Countercurrent chromatography
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
Hydrodynamic countercurrent chromatography (CCC)
The operating principle of a CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution, and components of the sample separate in the column due to their partitioning coefficient between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation currently available.
Centrifugal partition chromatography (CPC)
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis, creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be easily described using the partition coefficients (KD) of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations with different sizes of columns ranging from some 10 milliliters to 10 liters in volume.
Periodic counter-current chromatography
In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It thus is much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in this series without losing product, which already breaks through the column before the resin is fully saturated. The breakthrough product is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.
Chiral chromatography
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g., HPLC without a chiral mobile phase or stationary phase).
Aqueous normal-phase chromatography
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
Applications
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
See also
Affinity chromatography
Aqueous normal-phase chromatography
Binding selectivity
Chiral analysis
Chromatofocusing
Chromatography in blood processing
Chromatography software
Glowmatography
Multicolumn countercurrent solvent gradient purification (MCSGP)
Purnell equation
Van Deemter equation
References
External links
IUPAC Nomenclature for Chromatography
Overlapping Peaks Program – Learning by Simulations
Chromatography Videos – MIT OCW – Digital Lab Techniques Manual
Chromatography Equations Calculators – MicroSolv Technology Corporation
Chemical pathology
Biological techniques and tools
Russian inventions | Chromatography | [
"Chemistry",
"Biology"
] | 8,085 | [
"Chromatography",
"Separation processes",
"nan",
"Biochemistry",
"Chemical pathology"
] |
6,339 | https://en.wikipedia.org/wiki/Cell%20biology | Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
History
Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in Micrographia) after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead and gave no indication of the actual internal components of a living cell. A few years later, in 1674, Anton van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory, which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. Nineteen years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology.
Techniques
Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to hold a better understanding of the structure and function of cells. Many techniques commonly used to study cell biology are listed below:
Cell culture: Utilizes rapidly growing cells on media which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins).
Fluorescence microscopy: Fluorescent markers such as GFP are used to label a specific component of the cell. A specific wavelength of light is then used to excite the fluorescent marker, which can then be visualized.
Phase-contrast microscopy: Exploits the phase shifts of light passing through a transparent specimen, rendering differences in refractive index and thickness as differences in brightness.
Confocal microscopy: Combines fluorescence microscopy with point illumination and pinhole detection, capturing optical sections that can be assembled into a 3-D image.
Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which will be deflected upon interaction with metal. This ultimately forms an image of the components being studied.
Cytometry: Cells are passed through an instrument that probes them with a light beam; the way each cell scatters the light reports properties such as size and internal content, allowing cells to be separated on that basis. Cells may also be tagged with GFP fluorescence and sorted that way as well.
Cell fractionation: This process requires breaking up the cell, using high temperature or sonication, followed by centrifugation to separate the parts of the cell, allowing them to be studied separately.
Cell types
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista.
Prokaryotes reproduce through binary fission. Bacteria, the most prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. Many processes occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility factor allows a bacterium to possess a pilus through which it can transmit DNA to another bacterium lacking the F factor, permitting the transfer of resistance genes that allow survival in certain environments.
Structure and function
Structure of eukaryotic cells
Eukaryotic cells are composed of the following organelles:
Nucleus: The nucleus houses the cell's genome and stores its genetic information, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus. This is also the site for replication of DNA as well as transcription of DNA to RNA. Afterwards, the RNA is modified and transported out to the cytosol to be translated to protein.
Nucleolus: This structure is within the nucleus, usually dense and spherical. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly.
Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell and a cell's function determines the size and structure of the ER.
Mitochondria: Commonly known as the powerhouse of the cell, the mitochondrion is a double-membrane-bound organelle. It functions to produce energy, or ATP, within the cell. Specifically, it is the place where the Krebs cycle, or TCA cycle, occurs, producing NADH and FADH2. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP.
Golgi apparatus: This functions to further process, package, and secrete proteins to their destinations. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct them to the correct place. The Golgi apparatus also produces glycoproteins and glycolipids.
Lysosome: The lysosome functions to degrade material brought in from the outside of the cell or old organelles. It contains many acid hydrolases, including proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes, which occurs when a vesicle buds off from the ER and engulfs the material, then attaches and fuses with the lysosome to allow the material to be degraded.
Ribosomes: Function to translate mRNA into protein; they serve as the site of protein synthesis.
Cytoskeleton: Cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cells and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins.
Cell membrane: The cell membrane can be described as a phospholipid bilayer with embedded proteins. Because the inside of the bilayer is hydrophobic, molecules that participate in reactions within the cell must cross this membrane layer to get into the cell, via osmotic pressure, diffusion, concentration gradients, and membrane channels.
Centrioles: Function to produce spindle fibers which are used to separate chromosomes during cell division.
Eukaryotic cells may also be composed of the following molecular components:
Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins.
Cilia: They help to propel substances and can also be used for sensory purposes.
Cell metabolism
Cell metabolism is necessary for the production of energy for the cell and therefore its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation by the multi-enzyme pyruvate dehydrogenase complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain, ultimately forming a proton gradient across the inner mitochondrial membrane. This gradient then drives the production of ATP during oxidative phosphorylation. Metabolism in plant cells includes photosynthesis, which in net terms is the reverse of respiration, ultimately producing molecules of glucose; the two overall reactions are summarized below.
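The standard textbook net equations make the mirror-image relationship explicit (a simplification that omits all intermediates and the details of energy capture):

Photosynthesis: 6 CO2 + 6 H2O + light energy -> C6H12O6 + 6 O2
Cellular respiration: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O + energy (captured as ATP)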
Cell signaling
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or through endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine signaling is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its own surface. Forms of communication can be through:
Ion channels: Can be of different types such as voltage or ligand gated ion channels. They allow for the outflow and inflow of molecules and ions.
G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain; once the ligand binds, this signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenylyl cyclase or phospholipase C, which ultimately produce second messengers such as cAMP, IP3, DAG, and calcium. These second messengers function to amplify signals and can target ion channels or other enzymes. One example of signal amplification is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to enter the nucleus and phosphorylate other proteins to either repress or activate gene activity.
Receptor tyrosine kinases: Bind growth factors, promoting cross-phosphorylation of tyrosines on the intracellular portion of the receptor. The phosphorylated tyrosines become landing pads for proteins containing an SH2 domain, allowing for the activation of Ras and the involvement of the MAP kinase pathway.
Growth and development
Eukaryotic cell cycle
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development which involves cell growth, DNA replication, cell division, regeneration, and cell death.
The cell cycle is divided into four distinct phases: G1, S, G2, and M. The growth phases of interphase make up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling such as induction can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S (DNA replication, damage, and repair) phases are considered to be the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion of the cycle. Mitosis is composed of many stages, which include prophase, metaphase, anaphase, telophase, and cytokinesis. The ultimate result of mitosis is the formation of two identical daughter cells.
The cell cycle is regulated at cell cycle checkpoints by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinases, and p53. When the cell has completed its growth process, and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it could pose to the organism's survival.
Cell mortality, cell lineage immortality
The ancestry of each present-day cell presumably traces back, in an unbroken lineage over 3 billion years, to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation (as occurs in nerve cells), or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing the DNA damage that does occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damage in the germ line by homologous recombination.
Cell cycle phases
The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). After completing the cycle, the cell either restarts it from G1 or leaves it through G0, from which it may proceed to terminal differentiation. Finally, interphase refers to the phases of the cell cycle that occur between one mitosis and the next; it includes G1, S, and G2. Thus, the phases are as follows (a minimal state-machine sketch appears after the list):
G1 phase: the cell grows in size, copies organelles, and makes the molecular building blocks it will need in later steps.
S phase: the cell replicates each of its chromosomes (46 in a human cell).
G2 phase: in preparation for cell division, new organelles and proteins form.
M phase: mitosis and cytokinesis occur, resulting in two identical daughter cells.
G0 phase: the two cells enter a resting stage where they perform their functions without actively preparing to divide.
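A minimal state-machine sketch of these transitions, in Python; the checkpoint logic is deliberately simplified and hypothetical, standing in for the cyclin/Cdk machinery described in the next sections.

PHASES = ["G1", "S", "G2", "M"]

def next_phase(phase: str, checkpoint_ok: bool, divide_again: bool = True) -> str:
    """Advance one step through the cycle; a failed checkpoint holds the current phase."""
    if phase == "G0":
        return "G1" if divide_again else "G0"  # a resting cell may re-enter the cycle
    if not checkpoint_ok:
        return phase                           # e.g. DNA damage arrests the cycle here
    if phase == "M":
        return "G1" if divide_again else "G0"  # restart the cycle or rest in G0
    return PHASES[PHASES.index(phase) + 1]

For example, next_phase("G2", checkpoint_ok=False) returns "G2", modeling a G2/M arrest in response to unrepaired DNA damage.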
Pathology
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer, and precancerous cervical lesions that may lead to cervical cancer.
Cell cycle checkpoints and DNA damage repair system
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints constitute a monitoring strategy for accurate cell cycling and division. Cdks, their associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and dephosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement, regulates the DNA repair mechanism, alters the cell cycle, and can trigger apoptosis. Among the biochemical structures and processes that detect DNA damage are ATM and ATR, which induce the DNA repair checkpoints.
The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. Major events happen during a cell cycle, including cell growth and the replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cycle's integrity, accuracy, and chronology. Each checkpoint serves as a potential cell cycle endpoint at which the cell's parameters are examined; only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, cell growth continues while protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. In eukaryotes, DNA replication is restricted to a separate synthesis stage, known as the S phase. During mitosis, also known as the M phase, the segregation of the chromosomes occurs. DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, however, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. Mutations can occur when erroneous nucleotides are incorporated during DNA replication. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by directly reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases altered by the insertion of methyl or ethyl groups at the purine ring's O6 position.
Mitochondrial membrane dynamics
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP, which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology through the discovery of cell signaling pathways involving mitochondria, which are crucial platforms for the regulation of cell functions such as apoptosis. Their physiological adaptability is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including membrane fusion and fragmentation (fission) and ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic processes but also complicated cell signaling processes such as stem cell pluripotency, proliferation, maturation, aging, and mortality. Post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that substantially affect mitochondrial membrane dynamics. Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, deeply twisted invaginations that enlarge the membrane's surface area and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, on the other hand, is soft and permeable. It therefore acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of mitochondrial study, it has been well documented that mitochondria can take a variety of forms, with both their general and ultrastructural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger networks; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to this adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.
Autophagy
Autophagy is a self-degradative mechanism that regulates energy sources during growth and in reaction to dietary stress. Autophagy also cleans up after the cell, clearing aggregated proteins, removing damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular pathogens. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of innate and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macroautophagy, microautophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macroautophagy is triggered, an exclusion membrane incorporates a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, in which lysosomal enzymes degrade the components. In microautophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) provides protein quality assurance by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein denaturation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiological and stressful situations, this cellular process is vital for upholding the correct cellular balance. Because of its involvement in controlling cell integrity, autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration. The modification of autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming polyphenols in the diet. As a result, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (phagophore), known as nucleation, is the first step in macroautophagy. The phagophore engulfs dysregulated polypeptides or defective organelles, drawing membrane from sources such as the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's enlargement ends with the completion of the autophagosome. The autophagosome then combines with lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.
Notable cell biologists
Jean Baptiste Carnoy
Peter Agre
Günter Blobel
Robert Brown
Geoffrey M. Cooper
Christian de Duve
Henri Dutrochet
Robert Hooke
H. Robert Horvitz
Marc Kirschner
Anton van Leeuwenhoek
Ira Mellman
Marta Miączyńska
Peter D. Mitchell
Rudolf Virchow
Paul Nurse
George Emil Palade
Keith R. Porter
Ray Rappaport
Michael Swann
Roger Tsien
Edmund Beecher Wilson
Kenneth R. Miller
Matthias Jakob Schleiden
Theodor Schwann
Yoshinori Ohsumi
Jan Evangelista Purkyně
See also
The American Society for Cell Biology
Cell biophysics
Cell disruption
Cell physiology
Cellular adaptation
Cellular microbiology
Institute of Molecular and Cell Biology (disambiguation)
Meiomitosis
Organoid
Outline of cell biology
Notes
References
External links
Aging Cell
"Francis Harry Compton Crick (1916–2004)" by A. Andrei at the Embryo Project Encyclopedia
"Biology Resource By Professor Lin." | Cell biology | [
"Chemistry",
"Biology"
] | 5,683 | [
"Biochemistry",
"Cell biology",
"Molecular biology"
] |
6,355 | https://en.wikipedia.org/wiki/Chloroplast | A chloroplast is a type of organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Chloroplasts have a high concentration of chlorophyll pigments, which capture the energy of sunlight, convert it to chemical energy, and release oxygen. The chemical energy created is then used to make sugar and other organic molecules from carbon dioxide in a process called the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in some unicellular algae, up to 100 in plants like Arabidopsis and wheat.
Chloroplasts are highly dynamic—they circulate and are moved around within cells. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts cannot be made anew by the plant cell and must be inherited by each daughter cell during cell division, a trait thought to be inherited from their ancestor, a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell.
Chloroplasts evolved from an ancient cyanobacterium that was engulfed by an early eukaryotic cell. Because of their endosymbiotic origins, chloroplasts, like mitochondria, contain their own DNA separate from the cell nucleus. With one exception (the amoeboid Paulinella chromatophora), all chloroplasts can be traced back to a single endosymbiotic event. Despite this, chloroplasts can be found in extremely diverse organisms that are not directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.
Discovery and etymology
The first definitive description of a chloroplast (Chlorophyllkörnen, "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, Andreas Franz Wilhelm Schimper named these bodies as "chloroplastids" (Chloroplastiden). In 1884, Eduard Strasburger adopted the term "chloroplasts" (Chloroplasten).
The word chloroplast is derived from the Greek words chloros (χλωρός), which means green, and plastes (πλάστης), which means "the one who forms".
Endosymbiotic origin of chloroplasts
Chloroplasts are one of many types of organelles in photosynthetic eukaryotic cells. They evolved from cyanobacteria through a process called organellogenesis. Cyanobacteria are a diverse phylum of gram-negative bacteria capable of carrying out oxygenic photosynthesis. Like chloroplasts, they have thylakoids. The thylakoid membranes contain photosynthetic pigments, including chlorophyll a. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905 after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and some species of the amoeboid Paulinella.
Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed.
Primary endosymbiosis
Approximately two billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in and persist inside the cell. This event is called endosymbiosis, or "one cell living inside another with a mutual benefit for both". The external cell is commonly referred to as the host while the internal cell is called the endosymbiont. The engulfed cyanobacterium provided an advantage to the host by providing sugar from photosynthesis. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. Some of the cyanobacterial proteins were then synthesized by the host cell and imported back into the chloroplast (formerly the cyanobacterium), allowing the host to control the chloroplast.
Chloroplasts which can be traced back directly to a cyanobacterial ancestor (i.e. without a subsequent endosymbiotic event) are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). Chloroplasts that can be traced back to another photosynthetic eukaryotic endosymbiont are called secondary plastids or tertiary plastids (discussed below).
Whether primary chloroplasts came from a single endosymbiotic event or multiple independent engulfments across various eukaryotic lineages was long debated. It is now generally held that, with one exception (the amoeboid Paulinella chromatophora), chloroplasts arose from a single endosymbiotic event around two billion years ago and all share a single ancestor. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora. Separately, around 90–140 million years ago, this process happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus. This independently evolved chloroplast is often called a chromatophore instead of a chloroplast.
Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called serial endosymbiosis—an early eukaryote engulfed the mitochondrion ancestor, and descendants of it later engulfed the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.
Secondary and tertiary endosymbiosis
Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga with a primary chloroplast. These chloroplasts are known as secondary plastids.
As a result of the secondary endosymbiotic event, secondary chloroplasts have additional membranes outside of the original two found in primary chloroplasts. In secondary plastids, typically only the engulfed alga's chloroplast, and sometimes its cell membrane and nucleus, remain, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane.
The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast.
All secondary chloroplasts come from green and red algae. No secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote.
Still other organisms, including the dinoflagellates Karlodinium and Karenia, obtained chloroplasts by engulfing an organism with a secondary plastid. These are called tertiary plastids.
Primary chloroplast lineages
All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the rhodophyte ("red") chloroplast lineage, the chloroplastidan ("green") chloroplast lineage, and the amoeboid Paulinella chromatophora lineage. The glaucophyte, rhodophyte, and chloroplastidan lineages are all descended from the same ancestral endosymbiotic event and are all within the group Archaeplastida.
Glaucophyte chloroplasts
The glaucophyte chloroplast group is the smallest of the three Archaeplastida chloroplast lineages, with only 25 described glaucophyte species. Glaucophytes diverged first, before the red and green chloroplast lineages separated. Because of this, they are sometimes considered intermediates between cyanobacteria and the red and green chloroplasts. This early divergence is supported both by phylogenetic studies and by physical features present in glaucophyte chloroplasts and cyanobacteria but not in the red and green chloroplasts. First, glaucophyte chloroplasts have a peptidoglycan wall, a type of cell wall otherwise found only in bacteria (including cyanobacteria). Second, glaucophyte chloroplasts contain concentric unstacked thylakoids which surround a carboxysome, an icosahedral structure that contains the enzyme RuBisCO, responsible for carbon fixation. Third, starch created by the chloroplast is collected outside the chloroplast. Additionally, like cyanobacteria, both glaucophyte and rhodophyte thylakoids are studded with light-collecting structures called phycobilisomes.
Rhodophyta (red chloroplasts)
The rhodophyte, or red algae, group is a large and diverse lineage. Rhodophyte chloroplasts are also called rhodoplasts, literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll a and phycobilins as photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll a and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.
Chloroplastida (green chloroplasts)
The chloroplastida group is another large, highly diverse lineage that includes both green algae and land plants. This group is also called Viridiplantae, which includes two core clades—Chlorophyta and Streptophyta.
Most green chloroplasts are green in color, though some aren't due to accessory pigments that override the green from chlorophylls, such as in the resting cells of Haematococcus pluvialis. Green chloroplasts differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes, and contain chlorophyll b. They have also lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants have kept some genes required for the synthesis of peptidoglycan, but have repurposed them for use in chloroplast division instead. Chloroplastida lineages also keep their starch inside their chloroplasts. In plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts, as well as those of hornworts, contain a structure called a pyrenoid, which concentrates RuBisCO and CO2 in the chloroplast, functionally similar to the glaucophyte carboxysome.
There are some lineages of non-photosynthetic parasitic green algae that have lost their chloroplasts entirely, such as Prototheca, or have no chloroplast while retaining the separate chloroplast genome, as in Helicosporidium. Morphological and physiological similarities, as well as phylogenetics, confirm that these are lineages that ancestrally had chloroplasts but have since lost them.
Paulinella chromatophora
The photosynthetic amoeboids in the genus Paulinella—P. chromatophora, P. micropora, and the marine P. longichromatophora—have the only known independently evolved chloroplast, often called a chromatophore. While all other chloroplasts originate from a single ancient endosymbiotic event, Paulinella independently acquired an endosymbiotic cyanobacterium from the genus Synechococcus around 90–140 million years ago. Each Paulinella cell contains one or two sausage-shaped chloroplasts; they were first described in 1894 by German biologist Robert Lauterborn.
The chromatophore is highly reduced compared to its free-living cyanobacterial relatives and has limited functions. For example, it has a genome of about 1 million base pairs, one third the size of Synechococcus genomes, and only encodes around 850 proteins. However, this is still much larger than other chloroplast genomes, which are typically around 150,000 base pairs. Chromatophores have also transferred much less of their DNA to the nucleus of their hosts. About 0.3–0.8% of the nuclear DNA in Paulinella is from the chromatophore, compared with 11–14% from the chloroplast in plants. Similar to other chloroplasts, Paulinella provides specific proteins to the chromatophore using a specific targeting sequence. Because chromatophores are much younger than the canonical chloroplasts, Paulinella chromatophora is studied to understand how early chloroplasts evolved.
Secondary and tertiary chloroplast lineages
Green algal derived chloroplasts
Green algae have been taken up as endosymbionts by many groups in three or four separate events. Secondary chloroplasts derived from green algae are found primarily in the euglenids and chlorarachniophytes. They are also found in one lineage of dinoflagellates and possibly in the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles, and haptophytes). Many green algal derived chloroplasts contain pyrenoids, but unlike the chloroplasts of their green algal ancestors, storage product collects in granules outside the chloroplast.
Euglenophytes
The euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophytes are the only group outside Diaphoretickes that have chloroplasts without performing kleptoplasty. Euglenophyte chloroplasts have three membranes. It is thought that the membrane of the primary endosymbiont host was lost (e.g. the green algal membrane), leaving the two cyanobacterial membranes and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. The carbon fixed through photosynthesis is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.
Chlorarachniophytes
Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a red algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast.
Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm.
Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm.
Prasinophyte-derived chloroplast
Dinoflagellates in the genus Lepidodinium have lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). Lepidodinium is the only dinoflagellate that has a chloroplast that is not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).
Red algal derived chloroplasts
Secondary chloroplasts derived from red algae appear to have been taken up only once, which then diversified into a large group called chromalveolates. Today they are found in the haptophytes, cryptomonads, heterokonts, dinoflagellates, and apicomplexans (the CASH lineage). Red algal secondary chloroplasts usually contain chlorophyll c and are surrounded by four membranes.
Cryptophytes
Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes. The outermost membrane is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the ancestral red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Cryptophyte chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in the thylakoid space, rather than anchored on the outside of their thylakoid membranes.
Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.
Haptophytes
Haptophytes are similar and closely related to cryptophytes or heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize chrysolaminarin sugar, which is stored in granules completely outside of the chloroplast, in the cytoplasm of the haptophyte.
Stramenopiles (heterokontophytes)
The stramenopiles, also known as heterokontophytes, are a very large and diverse group of eukaryotes. They include Ochrophyta—which contains the diatoms, brown algae (seaweeds), and golden algae (chrysophytes)—and Xanthophyceae (also called yellow-green algae).
Heterokont chloroplasts are very similar to haptophyte chloroplasts. They have a pyrenoid, triplet thylakoids, and, with some exceptions, a four-layered plastid envelope with the outermost membrane connected to the endoplasmic reticulum. Like haptophytes, stramenopiles store sugar in chrysolaminarin granules in the cytoplasm. Stramenopile chloroplasts contain chlorophyll a and, with a few exceptions, chlorophyll c. They also have carotenoids which give them their many colors.
Apicomplexans, chromerids, and dinophytes
The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. Many members contain a red-algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid.
Apicomplexans
Apicomplexans are a group of alveolates. Like Helicosporidium, they are parasitic and have a nonphotosynthetic chloroplast. They were once thought to be related to Helicosporidium, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include Plasmodium, the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. Other apicomplexans like Cryptosporidium have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic.
The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, iron-sulfur clusters, and carry out part of the heme pathway. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dump the organelle.
Chromerids
The Chromerida is a newly discovered group of algae from Australian corals which comprises some close photosynthetic relatives of the apicomplexans. The first member, Chromera velia, was discovered and first isolated in 2001. The discovery of Chromera velia, with a structure similar to the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll c, and use the type II form of RuBisCO obtained from a horizontal transfer event.
Dinoflagellates
The dinoflagellates are yet another very large and diverse group, around half of which are at least partially photosynthetic (i.e., mixotrophic). Dinoflagellate chloroplasts have a relatively complex history. Most dinoflagellate chloroplasts are secondary red algal derived chloroplasts. Many dinoflagellates have lost the chloroplast (becoming nonphotosynthetic), while some of these have replaced it through tertiary endosymbiosis. Others replaced their original chloroplast with a green algal derived chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages.
The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin in their chloroplasts, along with chlorophyll a and chlorophyll c2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. Peridinin chloroplasts also have DNA that is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast.
Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll a, chlorophyll c2, beta-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.
Tertiary chloroplasts (haptophyte-derived)
The fucoxanthin dinophyte lineages (including Karlodinium and Karenia) lost their original red algal derived chloroplast and replaced it with a new chloroplast derived from a haptophyte endosymbiont, making these tertiary plastids. Karlodinium and Karenia probably took up different haptophytes. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six-membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it.
Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.
"Dinotoms" diatom-derived dinophyte chloroplasts
Some dinophytes, like Kryptoperidinium and Durinskia, have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to five membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont can't store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably can't be called a nucleomorph because it shows no sign of genome reduction, and might even have been expanded. Diatoms have been engulfed by dinoflagellates at least three times.
The diatom endosymbiont is bounded by a single membrane, inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids.
In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still around, converted to an eyespot.
Kleptoplasty
In some groups of mixotrophic protists, like some dinoflagellates (e.g. Dinophysis), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may only have a lifetime of a few days and are then replaced.
Cryptophyte-derived dinophyte chloroplast
Members of the genus Dinophysis have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and Dinophysis species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the Dinophysis chloroplast is a kleptoplast—if so, Dinophysis chloroplasts wear out and Dinophysis species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.
Chloroplast DNA
Chloroplasts, like other endosymbiotic organelles, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA (cpDNA) was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast genomes from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.
Molecular structure
With few exceptions, chloroplasts have their entire genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long with a mass of about 80–130 million daltons. While chloroplast genomes can almost always be assembled into a circular map, the physical DNA molecules inside cells take on a variety of linear and branching forms. New chloroplasts may contain up to 100 copies of their genome, though the number of copies decreases to about 15–20 as the chloroplasts age.
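As a rough consistency check on those two figures, double-stranded DNA averages about 650 daltons per base pair, so the quoted length and mass ranges agree:

# ~650 Da per base pair is a standard average for double-stranded DNA
for bp in (120_000, 170_000):
    print(f"{bp:,} bp -> {bp * 650 / 1e6:.0f} million daltons")
# prints ~78 and ~110 million daltons, in line with the quoted 80-130 MDa range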
Chloroplast DNA is usually condensed into nucleoids, which can contain multiple copies of the chloroplast genome. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Chloroplast DNA is not associated with true histones, proteins that are used to pack DNA molecules tightly in eukaryote nuclei. In red algae, however, similar proteins tightly pack each chloroplast DNA ring in a nucleoid.
Many chloroplast genomes contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). A given pair of inverted repeats are rarely identical, but they are always very similar to each other, apparently resulting from concerted evolution. The inverted repeats vary wildly in length, ranging from 4,000 to 25,000 base pairs long each and containing as few as four or as many as over 150 genes. The inverted repeat regions are highly conserved in land plants, and accumulate few mutations.
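To illustrate what "inverted repeat" means in this quadripartite layout, here is a tiny Python sketch; the sequences are short placeholders (real inverted repeats run to thousands of base pairs, and real plastomes are circular):

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq[::-1].translate(str.maketrans("ACGT", "TGCA"))

lsc = "ATGAAACCCGGGTTT"  # long single copy section (placeholder)
ssc = "TTAGCA"           # short single copy section (placeholder)
ir_a = "GATTACA"         # one copy of the inverted repeat (placeholder)
ir_b = revcomp(ir_a)     # the other copy, inverted relative to the first

plastome = lsc + ir_a + ssc + ir_b  # map order: LSC - IRa - SSC - IRb
assert revcomp(ir_b) == ir_a        # each repeat reads as the reverse
                                    # complement of the other

The two single copy sections are thus separated by a pair of regions that carry the same genes but point in opposite directions along the molecule.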
Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceae), suggesting that they predate the chloroplast. Some chloroplast genomes have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast genomes which have lost some of the inverted repeat segments tend to get rearranged more.
DNA repair and replication
In chloroplasts of the moss Physcomitrella patens, the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant Arabidopsis thaliana the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage.
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing the replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination occurs when an amino group is lost and is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine. Hypoxanthine can bind to cytosine, and when this hypoxanthine–cytosine base pair is replicated, it becomes a G–C pair (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
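A toy sketch of the gradient signal just described: given an ancestral and a derived sequence (both hypothetical here, and assumed pre-aligned), one can tabulate the fraction of ancestral A sites that now read G in successive windows; a monotonic rise toward one end is the kind of A → G gradient the text describes. Real analyses compare homologous plastomes across species.

def a_to_g_fractions(ancestral: str, derived: str, window: int = 50) -> list[float]:
    """Fraction of ancestral 'A' sites observed as 'G' in the derived
    sequence, computed per window along the alignment."""
    assert len(ancestral) == len(derived)
    fractions = []
    for start in range(0, len(ancestral), window):
        anc = ancestral[start:start + window]
        der = derived[start:start + window]
        a_sites = [i for i, base in enumerate(anc) if base == "A"]
        changed = sum(1 for i in a_sites if der[i] == "G")
        fractions.append(changed / len(a_sites) if a_sites else 0.0)
    return fractions

Under the D-loop model, windows that spent the longest time single stranded (those nearest the origin) should show the highest fractions.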
A competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of the failure to explain the deamination gradient as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.
Gene content and protein synthesis
The ancestral cyanobacteria that led to chloroplasts probably had a genome that contained over 3000 genes, but only approximately 100 genes remain in contemporary chloroplast genomes. These genes code for a variety of products, mostly related to the protein pipeline and to photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs).
Among land plants, the contents of the chloroplast genome are fairly similar.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer. As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating that chloroplasts can lose their genome during the endosymbiotic gene transfer process.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provides evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Of the approximately 3000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling. Recent research indicates that parts of the retrograde signaling network once considered characteristic of land plants had already emerged in an algal progenitor, integrating into co-expressed cohorts of genes in the closest algal relatives of land plants.
Protein synthesis
Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA; the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
Protein targeting and import
Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway. Many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and are therefore topologically outside of the cell: to reach the chloroplast from the cytosol, a protein must cross the cell membrane, which amounts to entering the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
In most, but not all, cases, nuclear-encoded chloroplast proteins are translated with a cleavable transit peptide that is added to the N-terminus of the protein precursor. Sometimes the transit sequence is instead found on the C-terminus of the protein, or within the functional part of the protein.
Transport proteins and membrane translocons
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates, or adds a phosphate group to, many (but not all) of them in their transit sequences.
Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, the polypeptide must retain just enough shape to be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast.
From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or translocon on the outer chloroplast membrane, and the TIC complex, or translocon on the inner chloroplast membrane. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Structure
In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are ≈20 μm³ in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., Oedogonium), a cup (e.g., Chlamydomonas), a ribbon-like spiral around the edges of the cell (e.g., Spirogyra), or slightly twisted bands at the cell edges (e.g., Sirogonium). Some algae have two chloroplasts in each cell; they are star-shaped in Zygnema, or may follow the shape of half the cell in the order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles; for example, some species of Chlorella have a cup-shaped chloroplast that occupies much of the cell.
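As a rough consistency check (an approximation assuming mid-range dimensions, not a calculation from the source), a lens-shaped chloroplast can be modelled as an oblate spheroid of diameter d and thickness t, whose volume is

V = \frac{\pi}{6} d^{2} t \approx \frac{\pi}{6}\,(5\ \mu\text{m})^{2}\,(1.5\ \mu\text{m}) \approx 20\ \mu\text{m}^{3},

in line with the ≈20 μm³ quoted above for corn seedling chloroplasts.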
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram negative cell wall, and not the phagosomal membrane from the host, which was probably lost. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.
There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium. This is not true: both chloroplast membranes are homologous to the cyanobacterium's original double membranes.
The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation to generate ATP energy. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is in the opposite direction compared to oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.
Outer chloroplast membrane
The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or translocon on the outer chloroplast membrane.
The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule. Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.
Intermembrane space and peptidoglycan wall
Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes.
Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from Latin "mura", meaning "wall"). Other chloroplasts were long assumed to have lost the cyanobacterial wall, leaving an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since been found also in mosses, lycophytes, and ferns.
Inner chloroplast membrane
The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex (translocon on the inner chloroplast membrane) which is located in the inner chloroplast membrane.
In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.
Peripheral reticulum
Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and the intermembrane space.
Stroma
The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.
Chloroplast ribosomes
Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While chloroplast ribosomes are similar to bacterial ribosomes, chloroplast translation is more complex than bacterial translation, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3' tail of 23S rRNA is found in "higher" plants.
Plastoglobuli
Plastoglobuli (singular plastoglobulus; sometimes spelled plastoglobule(s)) are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts.
Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids including plastoquinone, vitamin E, carotenoids and chlorophylls.
Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid.
Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.
Starch granules
Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane.
Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate.
Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.
RuBisCO
The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.
Pyrenoids
The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants. Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".
Thylakoid system
Thylakoids (sometimes spelled thylakoïds) are small interconnected sacs containing the membranes on which the light reactions of photosynthesis take place. The word thylakoid comes from the Greek word thylakos, which means "sack".
Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of membranous sacs called thylakoids where chlorophyll is found and the light reactions of photosynthesis happen.
In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.
Thylakoid structure
Under a light microscope, it is just barely possible to see tiny green granules, which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana.
In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick.
The three-dimensional structure of the thylakoid membrane system has been disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model known as the 'bifurcation model', which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns which bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. Likely for simplicity, the thylakoid system is still commonly depicted by older "hub and spoke" models where the grana are connected to each other by tubes of stromal thylakoids.
Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction.
The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch. Approximately 4 left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidate the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.
Thylakoid composition
Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine.
There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture.
In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They can't fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids.
The number of thylakoids and the total thylakoid area of a chloroplast are influenced by light exposure. Shaded chloroplasts contain larger and more numerous grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.
Pigments and chloroplast colors
Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis.
Paper chromatography of spinach leaf extract reveals the various pigments present in its chloroplasts, such as xanthophylls, chlorophyll a, and chlorophyll b.
Chlorophylls
Chlorophyll a is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll a is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll b, chlorophyll c, chlorophyll d, and chlorophyll f.
Chlorophyll b is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls a and b together that make most plant and green algal chloroplasts green.
Chlorophyll c is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll c is also found in some green algae and cyanobacteria.
Chlorophylls d and f are pigments found only in some cyanobacteria.
Carotenoids
In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, like during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll a. Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.
Phycobilins
Phycobilins are a third group of pigments found in cyanobacteria, and glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria don't have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.
Specialized chloroplasts in plants
To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle, which uses RuBisCO.
C4 plants evolved a way to solve this by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf.
As a result, chloroplasts in mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called C4 photosynthesis. The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity. Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma where they carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains.
Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to get more surface area for transporting materials in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts.
Function and chemistry
Guard cell chloroplasts
Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what they do is controversial.
Plant innate immunity
Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of the chloroplast's role in a plant cell's immune response, pathogens frequently target it.
Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence.
Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant.
In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection.
Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide and reactive oxygen species which can serve as defense-signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus.
In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—linolenic acid, a fatty acid, is a precursor to jasmonate.
Photosynthesis
One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).
Light reactions
The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, the reduced form of NADP+, and in ATP, to fuel the dark reactions.
Energy carriers
ATP, the phosphorylated version of adenosine diphosphate (ADP), stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.
Photophosphorylation
Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion, gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.
NADP+ reduction
Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions.
Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain electrons from their hydrogen atoms.
Cyclic photophosphorylation
While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in C4 plants, which need more ATP than NADPH.
Dark reactions
The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast.
While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions.
Carbon fixation and G3P synthesis
The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA.
The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P, sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions.
Sugars and starches
Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm.
Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast.
Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact.
Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis.
While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor.
Photorespiration
Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. These include Crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism.
pH
Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8.
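Because pH is a base-10 logarithmic scale, this difference of about four pH units corresponds to a large concentration ratio (a worked conversion of the figures above, not an additional measurement):

\frac{[\text{H}^{+}]_{\text{thylakoid}}}{[\text{H}^{+}]_{\text{stroma}}} = 10^{\,8-4} = 10^{4},

that is, roughly ten thousand times more hydrogen ions inside the thylakoid space than in the stroma.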
The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3.
CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much.
Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system.
In the presence of light, the pH of the thylakoid lumen can drop by up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit.
Amino acid synthesis
Chloroplasts alone make almost all of a plant cell's amino acids in their stroma except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (the proplastid too) but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine but it is unclear whether the organelle carries out the last leg of the pathway or if it happens in the cytosol.
Other nitrogen compounds
Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3) which supplies the plant with nitrogen to make its amino acids and nucleotides.
Other chemical products
The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical length of fatty acids produced in the plastid is 16 or 18 carbons, with 0–3 cis double bonds.
The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. Synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid then comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration.
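Because each condensation with malonyl-ACP adds two carbons to the growing chain, the chain length is easy to track (a worked count; the cycle numbers are inferred from the two-carbon increments, not stated in the source). Starting from the four-carbon butyryl-ACP, n extension cycles give

4 + 2n \text{ carbons}, \qquad n = 6 \Rightarrow \text{C}16, \qquad n = 7 \Rightarrow \text{C}18,

matching the typical 16- and 18-carbon plastid fatty acids noted above.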
Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites.
Location
Distribution in a plant
Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts, since the green color comes from the chlorophyll. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts.
In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers and the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf.
Cellular location
Chloroplast movement
The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones.
Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move.
In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption.
Studies of Vallisneria gigantea, an aquatic flowering plant, have shown that chloroplasts can get moving within five minutes of light exposure, though they don't initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place.
Differentiation, replication, and inheritance
Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common.
In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana.
If angiosperm shoots are not exposed to the light required for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll and has inner membrane invaginations that form a lattice of tubes in its stroma, called a prolamellar body. While etioplasts lack chlorophyll, they keep a stock of a yellow chlorophyll precursor. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, whereby the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts.
Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in.
Plastid interconversion
Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. The chloroplast, amyloplast, chromoplast, and proplastid states are not absolute; intermediate forms are common.
Division
Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells.
In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast.
Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells.
Much of what we know about chloroplast division comes from studying organisms like Arabidopsis and the red alga Cyanidioschyzon merolae.
The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments and, with the help of the protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein, ARC3, may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form.
Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like Cyanidioschyzon merolae, chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space.
Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts.
Later, the dynamins migrate under the outer plastid-dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast into two daughter chloroplasts.
A remnant of the outer plastid dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts.
Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms.
Regulation
In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown.
Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor quality green light, but are slow to complete division—they require exposure to bright white light to complete division. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts.
Chloroplast inheritance
Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs in very low levels in some flowering plants.
Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring.
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally.
Angiosperms, which pass on chloroplasts maternally, have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo.
Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance.
Transplastomic plants
Recently, chloroplasts have caught the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a failed containment rate of transplastomic plants at 3 in 1,000,000.
Footnotes
References
External links
Chloroplast – Cell Centered Database
Co-Extra research on chloroplast transformation
NCBI full chloroplast genome
Photosynthesis
Plastids
Endosymbiotic events | Chloroplast | [
"Chemistry",
"Biology"
] | 20,621 | [
"Symbiosis",
"Endosymbiotic events",
"Photosynthesis",
"Plastids",
"Biochemistry"
] |
6,359 | https://en.wikipedia.org/wiki/Crux | Crux () is a constellation of the southern sky that is centred on four bright stars in a cross-shaped asterism commonly known as the Southern Cross. It lies on the southern end of the Milky Way's visible band. The name Crux is Latin for cross. Even though it is the smallest of all 88 modern constellations, Crux is among the most easily distinguished as its four main stars each have an apparent visual magnitude brighter than +2.8. It has attained a high level of cultural significance in many Southern Hemisphere states and nations.
Blue-white α Crucis (Acrux) is the most southerly member of the constellation and, at magnitude 0.8, the brightest. The three other stars of the cross appear clockwise and in order of lessening magnitude: β Crucis (Mimosa), γ Crucis (Gacrux), and δ Crucis (Imai). ε Crucis (Ginan) also lies within the cross asterism. Many of these brighter stars are members of the Scorpius–Centaurus association, a large but loose group of hot blue-white stars that appear to share common origins and motion across the southern Milky Way.
Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its eastern border. Nearby to the southeast is a large dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca.
History
The bright stars in Crux were known to the Ancient Greeks, where Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 AD, the stars in the constellation now called Crux never rose above the horizon throughout most of Europe. Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his Divine Comedy. His description, however, may be allegorical, and the similarity to the constellation a coincidence.
The 15th century Venetian navigator Alvise Cadamosto made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the carro dell'ostro ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras for being the first European to depict it correctly. Faras sketched and described the constellation (calling it "las guardas") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch.
Explorer Amerigo Vespucci seems to have observed not only the Southern Cross but also the neighboring Coalsack Nebula on his second voyage in 1501–1502.
Another early modern description clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515 to 1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole including the Southern Cross and the two Magellanic Clouds seen in an external orientation, as on a globe.
Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux.
Characteristics
Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −55.68° and −64.70°. The whole constellation is visible for at least part of the year to observers south of the 25th parallel north.
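The quoted sky fraction follows directly from the area figure (a worked check; the total sky area of 41,253 square degrees is a standard value supplied here, not taken from the text):

\frac{68}{41{,}253} \approx 0.165\%.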
In tropical regions Crux can be seen in the sky from April to June. Crux is exactly opposite Cassiopeia on the celestial sphere, so the two can never appear in the sky at the same time. In the current era, south of Cape Town, Adelaide, and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky.
Crux is sometimes confused with the nearby False Cross asterism by stargazers. The False Cross consists of stars in Carina and Vela, is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross.
Visibility
Crux is easily visible from the southern hemisphere, south of the 35th parallel south, at practically any time of year, as it is circumpolar there. It is also visible near the horizon from tropical latitudes of the northern hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancún or any other place at latitude 25° N or less at around 10 pm at the end of April. The constellation has five main stars.
Due to precession, Crux will move closer to the South Pole over the next few millennia, the middle of the constellation reaching up to 67 degrees south declination. Afterwards it will drift north again: by the year 14,000, Crux will be visible from most of Europe and the continental United States, and its visibility will extend to northern Europe by the year 18,000, when its declination will be less than 30 degrees south.
Use in navigation
In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ to α Crucis (the foot of the crucifix) and continuing approximately 4.5 times that distance beyond gives a point close to the south celestial pole. This point is also, coincidentally, where the projected line intersects a perpendicular taken southwards from the midpoint of the east–west axis between Alpha Centauri and Beta Centauri, stars that lie at a declination similar to Crux's and are separated by about the width of the cross, though they are brighter. Argentine gauchos are documented as using Crux for night orientation in the Pampas and Patagonia.
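As an illustration of this pointing method, here is a minimal sketch, assuming approximate J2000 coordinates for the two stars and plain great-circle geometry (none of which appear in the source): it extends the γ–α arc by 4.5 times its length and confirms that the result lands near declination −90°.

```python
import numpy as np

def radec_to_vec(ra_deg, dec_deg):
    """Unit vector for a sky position given in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def slerp(p, q, t):
    """Point at parameter t along the great circle through p and q (t=0 -> p, t=1 -> q)."""
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

gacrux = radec_to_vec(187.79, -57.11)  # gamma Crucis (approximate J2000)
acrux  = radec_to_vec(186.65, -63.10)  # alpha Crucis (approximate J2000)

# Continue 4.5 arc-lengths past Acrux, i.e. t = 1 + 4.5 on the gamma->alpha arc.
pole_estimate = slerp(gacrux, acrux, 1 + 4.5)
print(f"estimated declination: {np.degrees(np.arcsin(pole_estimate[2])):.1f} deg")
# prints a value within a few degrees of -90, the south celestial pole
```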
Alpha and Beta Centauri are of similar declinations (and thus distance from the pole) and are often referred to as the "Southern Pointers" or just "The Pointers", allowing people to easily identify the Southern Cross. Very few bright stars lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately south of Crux.
Bright stars
Ninety-two stars brighter than apparent magnitude +2.5 are visible from Earth. Three of them lie in Crux, making it the most densely populated constellation in such bright stars: they amount to 3.26% of the 92, roughly 19.2 times the 0.17% expected from a homogeneous distribution of all bright stars across the sky, given that Crux covers only 0.17% of the sky.
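The quoted overdensity is simple arithmetic; a minimal check, assuming only Crux's 68 square degrees and a total sky area of about 41,253 square degrees (the latter is not stated in the source), reproduces the figures:

```python
# Check of the bright-star overdensity figure for Crux.
bright_total = 92            # stars brighter than magnitude +2.5, whole sky
bright_in_crux = 3
area_fraction = 68 / 41253   # Crux's 68 sq. deg. out of ~41,253 sq. deg. of sky

observed = bright_in_crux / bright_total   # ~3.26%
expected = area_fraction                   # ~0.165%, rounded to 0.17% above
print(f"observed {observed:.2%}, expected {expected:.2%}, "
      f"overdensity ~{observed / expected:.1f}x")
# ~19.8x with the unrounded area; the text's 19.2 follows from the rounded 0.17%
```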
Features
Stars
Within the constellation's borders, there are 49 stars brighter than or equal to apparent magnitude 6.5. The four main stars that form the asterism are Alpha, Beta, Gamma, and Delta Crucis.
α Crucis or Acrux is a triple star 321 light-years from Earth. Rich blue in colour, with a visual magnitude of 0.8 to the unaided eye, it has two close components of similar magnitudes, 1.3 and 1.8 respectively, plus another much wider component of the 5th magnitude. The two close components are resolved in a small amateur telescope and the wide component is readily visible in a pair of binoculars.
β Crucis or Mimosa is a blue-hued giant star of magnitude 1.3, and lies 353 light-years from Earth. It is a Beta Cephei-type variable star with a variation of less than 0.1 magnitudes.
γ Crucis or Gacrux is an optical double star. The primary is a red-hued giant star of magnitude 1.6, 88 light-years from Earth, and is one of the closest red giants to Earth. Its secondary component is magnitude 6.5, 264 light-years from Earth.
δ Crucis (Imai) is a magnitude 2.8 blue-white hued star about 345 light-years from Earth. Like Mimosa, it is a Beta Cephei variable.
There is also a fifth star that is often included with the Southern Cross:
ε Crucis (Ginan) is an orange-hued giant star of magnitude 3.6, 228 light-years from Earth.
There are several other naked-eye stars within the borders of Crux, among them:
Iota Crucis is a visual double star 125 light-years from Earth. The primary is an orange-hued giant of magnitude 4.6; the secondary is of magnitude 9.5.
Mu Crucis or Mu1,2 Crucis is a wide double star whose components lie about 370 light-years from Earth. Both blue-white in colour, they are of magnitudes 4.0 and 5.1 respectively, and are easily split in small amateur telescopes or large binoculars.
Scorpius–Centaurus association
Unusually, a total of 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda and both the components of the visual double star, Mu.
Variable stars
Crux contains many variable stars. It boasts four Cepheid variables that may all reach naked eye visibility.
BG Crucis ranges from magnitude 5.34 to 5.58 over 3.3428 days,
T Crucis ranges from 6.32 to 6.83 over 6.73331 days,
S Crucis ranges from 6.22 to 6.92 over 4.68997 days,
R Crucis ranges from 6.4 to 7.23 over 5.82575 days.
Other well-studied variable stars include:
Lambda Crucis and Theta2 Crucis, which are both Beta Cephei-type variable stars.
BH Crucis, also known as Welch's Red Variable, is a Mira variable that ranges from magnitude 6.6 to 9.8 over 530 days. Discovered in October 1969, it became redder and brighter (its mean magnitude changing from 8.047 to 7.762), and its period lengthened by 25% over the first thirty years after its discovery.
Exoplanet host stars in Crux
The star HD 106906 has been found to have a planet—HD 106906 b—that has one of the widest orbits of any currently known planetary-mass companions.
Objects beyond the Local Arm
Crux is backlit by the multitude of stars of the Scutum–Crux Arm (more commonly called the Scutum–Centaurus Arm) of the Milky Way, the main inner arm in the local radial quarter of the galaxy. Partly obscuring it is:
The Coalsack Nebula lies partially within Crux and partly in the neighbouring constellations of Musca and Centaurus. It is the most prominent dark nebula in the skies, easily visible to the naked eye as a dark patch in the southern Milky Way. It can be found 6.5° southeast of the centre of Crux or 3° east of α Crucis. It covers a large area, about 7° by 5°, and is about 600 light-years away from Earth.
A key feature of the Scutum-Crux Arm is:
The Jewel Box, κ Crucis Cluster or NGC 4755, is a small but bright open cluster that appears as a fuzzy star to the naked eye and lies very close to the easternmost boundary of Crux, about 1° southeast of Beta Crucis. Its combined or total magnitude is 4.2 and it lies at a distance of roughly 6,400 light-years from Earth. The cluster was given its name by John Herschel, based on the range of colours visible throughout the star cluster in his telescope. About seven million years old, it is one of the youngest open clusters in the Milky Way, and it appears to have the shape of a letter 'A'. The Jewel Box Cluster is classified as a Shapley class 'g' and Trumpler class 'I 3 r -' cluster; it is a very rich, centrally concentrated cluster detached from the surrounding star field. It has more than 100 stars that range significantly in brightness. The brightest cluster stars are mostly blue supergiants, though the cluster contains at least one red supergiant. Kappa Crucis is a true member of the cluster that bears its name, and is one of the brighter stars at magnitude 5.9.
Cultural significance
The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the southern hemisphere, particularly of Australia, Brazil, Chile and New Zealand.
Flags and symbols
Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the flags of Australia, Brazil, New Zealand, Papua New Guinea and Samoa. They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory, the Northern Territory, as well as the flag of Magallanes Region of Chile, the flag of Londrina (Brazil) and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz). The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and on the cover of Brazilian passports.
Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the insignia of the Order of the Southern Cross, and the cross has featured as the name of the Brazilian currency (the cruzeiro, from 1942 to 1986 and again from 1990 to 1994). All coins of the 1998 series of the Brazilian real display the constellation.
Songs and literature reference the Southern Cross, including the Argentine epic poem Martín Fierro. The Argentinian singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren".
The Cross gets a mention in the lyrics of the Brazilian National Anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines").
The Southern Cross is mentioned in the Australian National Anthem: "Beneath our radiant Southern Cross we'll toil with hearts and hands".
The Southern Cross features in the coat of arms of William Birdwood, 1st Baron Birdwood, the British officer who commanded the Australian and New Zealand Army Corps during the Gallipoli Campaign of the First World War.
The Southern Cross is also mentioned in the Samoan National Anthem.
"Vaai 'i na fetu o lo'u a agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa." ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa.")
The 1952-53 NBC Television Series Victory At Sea contained a musical number entitled "Beneath the Southern Cross".
"Southern Cross" is a single released by Crosby, Stills and Nash in 1981. It reached #18 on Billboard Hot 100 in late 1982.
"The Sign of the Southern Cross" is a song released by Black Sabbath in 1981. The song was released on the album "Mob Rules".
The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation".
In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: Thy Southern Cross the night.
A stylized version of Crux appears on the Australian Eureka Flag. The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organized in the Southern Hemisphere, on the island of New Caledonia, and also on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain.
The Petersflagge flag of the German East Africa Company of 1885–1920, which included a constellation of five white five-pointed Crux "stars" on a red ground, later served as the model for symbolism associated with German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and a successor association (1956/1983 to the present).
Southern Cross station is a major rail terminal in Melbourne, Australia.
The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church primarily within the territory of the Australian Catholic Bishops Conference for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia.
The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia.
Various cultures
In India, there is a story related to the creation of Trishanku Swarga (त्रिशंकु), identified with the Cross (Crux), created by the sage Vishwamitra.
In Chinese, 十字架 (Shízìjià), meaning "Cross", refers to an asterism consisting of γ Crucis, α Crucis, β Crucis and δ Crucis.
In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the 'Emu in the Sky' (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the left hand of Tagai, and the stars of Musca as the trident of the fishing spear he is holding. In Aranda traditions of central Australia, the four Cross stars are the talon of an eagle, and Gamma Centauri its leg.
Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In both Indonesia and Malaysia, it is known as Bintang Pari and Buruj Pari, respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as sao Cá Liệt (the ponyfish star).
Among Filipino people, the Southern Cross has various names pertaining to tops, including kasing (Visayan languages), paglong (Bikol), and pasil (Tagalog). It is also called butiti (puffer fish) in Waray.
The Javanese people of Indonesia called this constellation Gubug pèncèng ("raking hut") or lumbung ("the granary"), because the shape of the constellation was like that of a raking hut.
The Southern Cross (α, β, γ and δ Crucis), together with μ Crucis, is one of the asterisms used by Bugis sailors for navigation, called bintoéng bola képpang, meaning "incomplete house star".
The Māori name for the Southern Cross is Māhutonga and it is thought of as the anchor (Te Punga) of Tama-rereti's waka (the Milky Way), while the Pointers are its rope. In Tonga it is known as Toloa ("duck"); it is depicted as a duck flying south, with one of his wings (δ Crucis) wounded because Ongo tangata ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as Humu (the "triggerfish"), because of its shape. In Samoa the constellation is called Sumu ("triggerfish") because of its rhomboid shape, while α and β Centauri are called Luatagata (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross. These included a knee protector and a net used to catch Palolo worms. Neighboring peoples in the Marshall Islands saw these stars as a fish. Peninsular Malays also see the likeness of a fish in the Crux, particularly the Scomberomorus or its local name Tohok.
In Mapudungun, the language of Patagonian Mapuches, the name of the Southern Cross is Melipal, which means "four stars". In Quechua, the language of the Inca civilization, Crux is known as "Chakana", which means literally "stair" (chaka, bridge, link; hanan, high, above), but carries a deep symbolism within Quechua mysticism. Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation encompassing Centaurus and Circinus along with the two bright stars. The Great Rhea was a constellation of the Bororo of Brazil. The Mocoví people of Argentina also saw a rhea including the stars of Crux. Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta and the optical double star Delta1,2 Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as Aganagi, angry bees that had emerged from the Coalsack, which they saw as the beehive.
Among Tuaregs, the four most visible stars of Crux are considered iggaren, i.e. four Maerua crassifolia trees. The Tswana people of Botswana saw the constellation as Dithutlwa, two giraffes – Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female.
See also
Trishanku
Northern Cross
Crux (Chinese astronomy)
Notes
References
Citations
Sources
External links
Finding the South Pole in the sky
The clickable Crux
Southern Cross in Te Ara – the Encyclopedia of New Zealand
Andrea Corsali – Letter to Giuliano de Medici, 1516 showing the Southern Cross at the State Library of NSW
Letter of Andrea Corsali 1516–1989: with additional material ("the first description and illustration of the Southern Cross, with speculations about Australia ...") digitised by the National Library of Australia.
'The Southern Cross': A Poem by Adam Sedia
National symbols of Australia
National symbols of Brazil
National symbols of New Zealand
National symbols of Papua New Guinea
National symbols of Samoa
Southern constellations
Heraldic charges
Constellations listed by Petrus Plancius | Crux | [
"Astronomy"
] | 5,237 | [
"Crux",
"Constellations listed by Petrus Plancius",
"Southern constellations",
"Constellations"
] |
6,362 | https://en.wikipedia.org/wiki/Cetus | Cetus () is a constellation, sometimes called 'the whale' in English. Cetus was a sea monster in Greek mythology which both Perseus and Heracles needed to slay. Cetus is in the region of the sky that contains other water-related constellations: Aquarius, Pisces and Eridanus.
Features
Ecliptic
Cetus is not among the 12 true zodiac constellations in the J2000 epoch, nor the classical 12-part zodiac. The ecliptic passes less than 0.25° from one of its corners. Thus the Moon and planets briefly enter Cetus (occulting any of its stars as foreground objects) in about half of their successive orbits, and the southern part of the Sun appears in Cetus for about one day each year. Many main-belt asteroids, having slightly greater inclinations to the ecliptic than the Moon and planets, spend longer spells occulting the north-western part of Cetus.
As seen from Mars, the ecliptic (the apparent plane of the Sun, which is nearly the same as the average plane of the planets) passes through Cetus.
Stars
Mira ("wonderful", named by Bayer: Omicron Ceti, a star of the neck of the asterism) was the first variable star to be discovered and the prototype of its class, Mira variables. Over a period of 332 days, it reaches a maximum apparent magnitude of 3 - visible to the naked eye - and dips to a minimum magnitude of 10, invisible to the unaided eye. Its seeming appearance and disappearance gave it its name. Mira pulsates with a minimum size of 400 solar diameters and a maximum size of 500 solar diameters. 420 light-years from Earth, it was discovered by David Fabricius in 1596.
α Ceti, traditionally called Menkar ("the nose"), is a red-hued giant star of magnitude 2.5, 220 light-years from Earth. It is a wide double star; the secondary is 93 Ceti, a blue-white hued star of magnitude 5.6, 440 light-years away. β Ceti, also called Deneb Kaitos and Diphda, is the brightest star in Cetus. It is an orange-hued giant star of magnitude 2.0, 96 light-years from Earth. The traditional name "Deneb Kaitos" means "the whale's tail". γ Ceti, Kaffaljidhma ("head of the whale"), is a very close double star. The primary is a blue-hued star of magnitude 3.5, 82 light-years from Earth, and the secondary is an F-type star of magnitude 6.6. Tau Ceti is noted for being a near Sun-like star at a distance of 11.9 light-years. It is a yellow-hued main-sequence star of magnitude 3.5.
AA Ceti is a triple star system; the brightest member has a magnitude of 6.2. The primary and secondary are separated by 8.4 arcseconds at an angle of 304 degrees. The tertiary is not visible in telescopes. AA Ceti is an eclipsing variable star; the tertiary star passes in front of the primary and causes the system's apparent magnitude to decrease by 0.5 magnitudes. UV Ceti is an unusual binary variable star. At 8.7 light-years from Earth, the system consists of two red dwarfs, both of magnitude 13. One of the stars is a flare star, a type prone to sudden, random outbursts lasting several minutes; these increase the pair's apparent brightness significantly, to as high as magnitude 7.
Deep-sky objects
Cetus lies far from the galactic plane, so that many distant galaxies are visible, unobscured by dust from the Milky Way. Of these, the brightest is Messier 77 (NGC 1068), a 9th magnitude spiral galaxy near Delta Ceti. It appears face-on and has a clearly visible nucleus of magnitude 10. About 50 million light-years from Earth, M77 is also a Seyfert galaxy and thus a bright object in the radio spectrum. Recently, the galaxy cluster JKCS 041 was confirmed to be the most distant cluster of galaxies yet discovered. The Pisces–Cetus Supercluster Complex is a galaxy filament that is one of the largest known structures in the observable Universe; it contains the Virgo Supercluster, which in turn contains the Local Group of the Milky Way and other galaxies.
The massive cD galaxy Holmberg 15A is also found in Cetus; as are the spiral galaxy NGC 1042, the elliptical galaxy NGC 1052 and the ultra-diffuse galaxy NGC 1052-DF2.
IC 1613 (Caldwell 51) is an irregular dwarf galaxy near the star 26 Ceti and is a member of the Local Group.
NGC 246 (Caldwell 56), also called the "Cetus Ring", is a planetary nebula with a magnitude of 8.0 at 1600 light-years from Earth. Among some amateur astronomers, NGC 246 has garnered the nickname "Pac-Man Nebula" because of the arrangement of its central stars and the surrounding star field.
The Wolf–Lundmark–Melotte (WLM) is a barred irregular galaxy discovered in 1909 by Max Wolf, located on the outer edges of the Local Group. The identification of its nature was credited to Knut Lundmark and Philibert Jacques Melotte in 1926.
UGC 1646, a spiral galaxy, also lies within the borders of the constellation, about 150 million light-years away. It can be seen near the star TYC 43-234-1.
History and mythology
Cetus may have originally been associated with a whale, which would have had mythic status amongst Mesopotamian cultures. It is often now called the Whale, though it is most strongly associated with Cetus the sea-monster, who was slain by Perseus as he saved the princess Andromeda from Poseidon's wrath. It is in the middle of "The Sea" recognised by mythologists, a set of water-associated constellations, its other members being Eridanus, Pisces, Piscis Austrinus and Aquarius.
Cetus has been depicted in many ways throughout its history. In the 17th century, Cetus was depicted as a "dragon fish" by Johann Bayer. Both Willem Blaeu and Andreas Cellarius depicted Cetus as a whale-like creature in the same century. However, Cetus has also been variously depicted with animal heads attached to a piscine body.
In global astronomy
In Chinese astronomy, the stars of Cetus are found among two areas: the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ) and the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ).
The Tukano and Kobeua people of the Amazon used the stars of Cetus to create a jaguar, representing the god of hurricanes and other violent storms. Lambda, Mu, Xi, Nu, Gamma, and Alpha Ceti represented its head; Omicron, Zeta, and Chi Ceti represented its body; Eta Eri, Tau Cet, and Upsilon Cet marked its legs and feet; and Theta, Eta, and Beta Ceti delineated its tail.
In Hawaii, the constellation was called Na Kuhi, and Mira (Omicron Ceti) may have been called Kane.
Namesakes
USS Cetus (AK-77) was a United States Navy Crater-class cargo ship named after the constellation.
See also
Cetus (Chinese astronomy)
Book of Jonah
References
Bibliography
Ian Ridpath and Wil Tirion (2007). Stars and Planets Guide. Collins, London; Princeton University Press, Princeton.
External links
The Deep Photographic Guide to the Constellations: Cetus
The clickable Cetus
Ian Ridpath's Star Tales – Cetus
Warburg Institute Iconographic Database (medieval and early modern images of Cetus)
Constellations
Equatorial constellations
Constellations listed by Ptolemy
Fictional cetaceans
Legendary mammals | Cetus | [
"Astronomy"
] | 1,676 | [
"Constellations listed by Ptolemy",
"Cetus",
"Constellations",
"Sky regions",
"Equatorial constellations"
] |
6,363 | https://en.wikipedia.org/wiki/Carina%20%28constellation%29 | Carina ( ) is a constellation in the southern sky. Its name is Latin for the keel of a ship, and it was the southern foundation of the larger constellation of Argo Navis (the ship Argo) until it was divided into three pieces, the other two being Puppis (the poop deck), and Vela (the sails of the ship).
History and mythology
Carina was once a part of Argo Navis, the great ship of the mythical Jason and the Argonauts who searched for the Golden Fleece. The constellation of Argo was introduced in ancient Greece. However, due to the massive size of Argo Navis and the sheer number of stars that required separate designation, Nicolas-Louis de Lacaille divided Argo into three sections in 1763, including Carina (the hull or keel). In the 19th century, these three became established as separate constellations, and were formally included in the list of 88 modern IAU constellations in 1930. Lacaille kept a single set of Greek letters for the whole of Argo, and separate sets of Latin letter designations for each of the three sections. Therefore, Carina has the α, β and ε, Vela has γ and δ, Puppis has ζ, and so on.
Notable features
Stars
Carina contains Canopus, a white-hued supergiant that is the second-brightest star in the night sky at magnitude −0.72. Alpha Carinae, as Canopus is formally designated, is 313 light-years from Earth. Its traditional name comes from the mythological Canopus, who was a navigator for Menelaus, king of Sparta.
There are several other stars above magnitude 3 in Carina. Beta Carinae, traditionally called Miaplacidus, is a blue-white-hued star of magnitude 1.7, 111 light-years from Earth. Epsilon Carinae is an orange-hued giant star similarly bright to Miaplacidus at magnitude 1.9; it is 630 light-years from Earth. Another fairly bright star is the blue-white-hued Theta Carinae; it is a magnitude 2.7 star 440 light-years from Earth. Theta Carinae is also the most prominent member of the cluster IC 2602. Iota Carinae is a white-hued supergiant star of magnitude 2.2, 690 light-years from Earth.
Eta Carinae is the most prominent variable star in Carina, with a mass of approximately 100 solar masses and 4 million times as bright as the Sun. It was first discovered to be unusual in 1677, when its magnitude suddenly rose to 4, attracting the attention of Edmond Halley. Eta Carinae is inside NGC 3372, commonly called the Carina Nebula. It had a long outburst in 1827, when it brightened to magnitude 1, only fading to magnitude 1.5 in 1828. Its most prominent outburst made Eta Carinae the equal of Sirius; it brightened to magnitude −1.5 in 1843. In the decades following 1843 it appeared relatively placid, having a magnitude between 6.5 and 7.9. However, in 1998, it brightened again, though only to magnitude 5.0, a far less drastic outburst. Eta Carinae is a binary star, with a companion that has a period of 5.5 years; the two stars are surrounded by the Homunculus Nebula, which is composed of gas that was ejected in 1843.
There are several less prominent variable stars in Carina. l Carinae is a Cepheid variable noted for its brightness; it is the brightest Cepheid whose variations can be followed with the unaided eye. It is a yellow-hued supergiant star that ranges between a minimum magnitude of 4.2 and a maximum magnitude of 3.3, with a period of 35.5 days.
V382 Carinae is a yellow hypergiant, one of the rarest types of stars. It is a slow irregular variable, with a minimum magnitude of 4.05 and a maximum magnitude of 3.77. As a hypergiant, V382 Carinae is a luminous star, with 212,000 times more luminosity than the Sun and over 480 times the Sun's size.
Two bright Mira variable stars are in Carina: R Carinae and S Carinae; both stars are red giants. R Carinae has a minimum magnitude of 10.0 and a maximum magnitude of 4.0. Its period is 309 days and it is 416 light-years from Earth. S Carinae is similar, with a minimum magnitude of 10.0 and a maximum magnitude of 5.0. However, S Carinae has a shorter period—150 days, though it is much more distant at 1,300 light-years from Earth.
Carina is home to several double stars and binary stars. Upsilon Carinae is a binary star with two blue-white-hued giant components, 1,600 light-years from Earth. The primary is of magnitude 3.0 and the secondary is of magnitude 6.0; the two components are distinguishable in a small amateur telescope.
Two asterisms are prominent in Carina. The 'Diamond Cross' is composed of the stars Beta, Theta, Upsilon and Omega Carinae. The Diamond Cross is visible south of 20ºN latitude, and is larger but fainter than the Southern Cross in Crux. Flanking the Diamond Cross is the False Cross, composed of four stars: two in Carina, Iota Carinae and Epsilon Carinae, and two in Vela, Kappa Velorum and Delta Velorum. It is often mistaken for the Southern Cross, causing errors in astronavigation.
Deep-sky objects
Carina is known for its namesake nebula, NGC 3372, discovered by French astronomer Nicolas-Louis de Lacaille in 1751, which contains several nebulae. The Carina Nebula overall is an extended emission nebula approximately 8,000 light-years away and 300 light-years wide that includes vast star-forming regions. It has an overall magnitude of 8.0 and an apparent diameter of over 2 degrees. Its central region is called the Keyhole, or the Keyhole Nebula. This was described in 1847 by John Herschel, and likened to a keyhole by Emma Converse in 1873. The Keyhole is about seven light-years wide and is composed mostly of ionized hydrogen, with two major star-forming regions. The Homunculus Nebula is a planetary nebula visible to the naked eye that is being ejected by the erratic luminous blue variable star Eta Carinae, the most massive visible star known. Eta Carinae is so massive that it has reached the theoretical upper limit for the mass of a star and is therefore unstable. It is known for its outbursts; in 1840 it briefly became one of the brightest stars in the sky due to a particularly massive outburst, which largely created the Homunculus Nebula. Because of this instability and history of outbursts, Eta Carinae is considered a prime supernova candidate for the next several hundred thousand years because it has reached the end of its estimated million-year life span.
NGC 2516 is an open cluster that is both quite large (approximately half a degree square) and bright, visible to the unaided eye. It is located 1,100 light-years from Earth and has approximately 80 stars, the brightest of which is a red giant star of magnitude 5.2. NGC 3114 is another open cluster of approximately the same size, though it is more distant at 3,000 light-years from Earth. It is looser and dimmer than NGC 2516, as its brightest stars are only 6th magnitude. The most prominent open cluster in Carina is IC 2602, also called the "Southern Pleiades". It contains Theta Carinae, along with several other stars visible to the unaided eye. In total, the cluster possesses approximately 60 stars. The Southern Pleiades is particularly large for an open cluster, with a diameter of approximately one degree. Like IC 2602, NGC 3532 is visible to the unaided eye and is of comparable size. It possesses approximately 150 stars that are arranged in an unusual shape, approximating an ellipse with a dark central area. Several prominent orange giants are among the cluster's bright stars, of the 7th magnitude. Superimposed on the cluster is Chi Carinae, a yellow-white-hued star of magnitude 3.9, far more distant than NGC 3532.
Carina also contains the naked-eye globular cluster NGC 2808. Epsilon Carinae and Upsilon Carinae are double stars visible in small telescopes.
One noted galaxy cluster is 1E 0657-56, the Bullet Cluster. At a distance of 4 billion light-years (redshift 0.296), this galaxy cluster is named for the shock wave seen in the intracluster medium, which resembles the shock wave of a supersonic bullet. The bow shock visible is thought to be due to the smaller galaxy cluster moving through the intracluster medium at a relative speed of 3,000–4,000 kilometers per second to the larger cluster. Because this gravitational interaction has been ongoing for hundreds of millions of years, the smaller cluster is being destroyed and will eventually merge with the larger cluster.
Meteors
Carina contains the radiant of the Eta Carinids meteor shower, which peaks around January 21 each year.
Equivalents
From China (especially northern China), the stars of Carina can barely be seen. The star Canopus (the south polar star in Chinese astronomy) was located by Chinese astronomers in the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). The rest of the stars were first classified by Xu Guangqi during the Ming dynasty, based on the knowledge acquired from western star charts, and placed among The Southern Asterisms (近南極星區, Jìnnánjíxīngōu).
Polynesian peoples had no name for the constellation in particular, though they had many names for Canopus.
The Māori name Ariki ("High-born") and the Hawaiian Ke Alii-o-kona-i-ka-lewa ("The Chief of the southern expanse") both attest to the star's prominence in the southern sky, while the Māori Atutahi ("First-light" or "Single-light") and the Tuamotu Te Tau-rari and Marere-te-tavahi ("He who stands alone") refer to the star's solitary nature.
It was also called Kapae-poto ("Short horizon"), because it rarely sets from the vantage point of New Zealand, and Kauanga ("Solitary"), when it was the last star visible before sunrise.
Future
Carina is in the southern sky quite near the south celestial pole, making it circumpolar (never setting) for most of the southern hemisphere. Due to precession of Earth's axis, by the year 4700 the south celestial pole will be in Carina. Three bright stars in Carina will come within 1 degree of the southern celestial pole and take turns as the southern pole star: Omega Carinae (magnitude 3.29) in 5600, Upsilon Carinae (magnitude 2.97) in 6700, and Iota Carinae (magnitude 2.21) in 7900. About 13,860 CE, the bright Canopus (magnitude −0.7) will have a greater declination than −82°.
Namesakes
USS Carina was a United States Navy Crater-class cargo ship named after the constellation.
The Toyota Carina car was also named after the constellation.
See also
Carina in Chinese astronomy
List of brightest stars
References
Secondary sources
External links
The Deep Photographic Guide to the Constellations: Carina
Starry Night Photography: Carina
Eta Carina Nebula by Thomas Willig
Star Tales – Carina
The clickable Carina
Huge gamma-ray blast seen 12.2 billion light-years from Earth
Constellations
Southern constellations
Constellations listed by Lacaille | Carina (constellation) | [
"Astronomy"
] | 2,486 | [
"Southern constellations",
"Constellations",
"Constellations listed by Lacaille",
"Carina (constellation)",
"Sky regions"
] |
6,364 | https://en.wikipedia.org/wiki/Camelopardalis | Camelopardalis is a large but faint constellation of the northern sky representing a giraffe. The constellation was introduced in 1612 or 1613 by Petrus Plancius. Some older astronomy books give Camelopardalus or Camelopardus as alternative forms of the name, but the version recognized by the International Astronomical Union matches the genitive form, seen suffixed to most of its key stars.
Etymology
First attested in English in 1785, the word camelopardalis comes from Latin, and it is the romanization of the Greek "καμηλοπάρδαλις" meaning "giraffe", from "κάμηλος" (kamēlos), "camel" + "πάρδαλις" (pardalis), "spotted", because it has a long neck like a camel and spots like a leopard.
Features
Stars
Although Camelopardalis is the 18th largest constellation, it is not particularly bright, as its brightest stars are only of fourth magnitude. In fact, it contains only four stars brighter than magnitude 5.0.
α Cam is a blue-hued supergiant star of magnitude 4.3, over 6,000 light-years from Earth. It is one of the most distant stars easily visible with the naked eye.
β Cam is the brightest star in Camelopardalis with an apparent magnitude of 4.03. This star is a double star, with components of magnitudes 4.0 and 8.6. The primary is a yellow-hued supergiant 1000 light-years from Earth.
11 Cam is a star of magnitude 5.2, 650 light-years from Earth. Without strong magnification it appears very close to the magnitude 6.1 star 12 Cam, which lies at about the same distance from us, but the two are not a true double star; their separation is considerable.
Σ 1694 (Struve 1694, 32H Cam) is a binary star 300 light-years from Earth. Both components have a blue-white hue; the primary is of magnitude 5.4 and the secondary is of magnitude 5.9.
CS Cam is the second brightest star, though it has neither a Bayer nor a Flamsteed designation. It is of magnitude 4.21 and is slightly variable.
Z Cam (varying from amateur-telescope visibility to extremely faint) is frequently observed as part of an AAVSO observing program. It is the prototype of the Z Camelopardalis class of variable stars.
Other variable stars are U Camelopardalis, VZ Camelopardalis, and Mira variables T Camelopardalis, X Camelopardalis, and R Camelopardalis. RU Camelopardalis is one of the brighter Type II Cepheids visible in the night sky.
In 2011 a supernova was discovered in the constellation.
Deep-sky objects
Camelopardalis is in the part of the celestial sphere facing away from the galactic plane. Accordingly, many distant galaxies are visible within its borders.
NGC 2403 is a galaxy in the M81 group of galaxies, located approximately 12 million light-years from Earth with a redshift of 0.00043. It is classified as being between an elliptical and a spiral galaxy because it has faint arms and a large central bulge. NGC 2403 was first discovered by the 18th century astronomer William Herschel, who was working in England at the time. It has an integrated magnitude of 8.0 and is approximately 0.25° long.
NGC 1502 is a magnitude 6.9 open cluster about 3,000 light-years from Earth. It has about 45 bright members, and also features a double star of magnitude 7.0 at its center. NGC 1502 is associated with Kemble's Cascade, a simple but beautiful asterism appearing in the sky as a chain of stars 2.5° long that is parallel to the Milky Way and points towards Cassiopeia.
NGC 1501 is a planetary nebula located roughly 1.4° south of NGC 1502.
Stock 23 is an open star cluster at the southern part of the border between Camelopardalis and Cassiopeia. It is also known as Pazmino's Cluster. It could be categorized as an asterism because of the small number of stars in it (a small telescopic constellation).
IC 342 is one of the two brightest galaxies in the IC 342/Maffei Group of galaxies.
The dwarf irregular galaxy NGC 1569 is a magnitude 11.9 starburst galaxy, about 11 million light years away.
NGC 2655 is a large lenticular galaxy with visual magnitude 10.1.
UGC 3697 is known as the Integral Sign Galaxy; it is located at approximately 7h 11.4m right ascension, +71° 50′ declination.
MS0735.6+7421 is a galaxy cluster with a redshift of 0.216, located 2.6 billion light-years from Earth. It is unique for its intracluster medium, which emits X-rays at a very high rate. This galaxy cluster features two cavities 600,000 light-years in diameter, caused by its central supermassive black hole, which emits jets of matter. MS0735.6+7421 is one of the largest and most distant examples of this phenomenon.
Tombaugh 5 is a fairly dim open cluster in Camelopardalis. It has an overall magnitude of 8.4 and is located 5,800 light-years from Earth. It is a Shapley class c and Trumpler class III 1 r cluster, meaning that it is irregularly shaped and appears loose. Though it is detached from the star field, it is not concentrated at its center at all. It has more than 100 stars which do not vary widely in brightness, mostly being of the 15th and 16th magnitude.
NGC 2146 is an 11th magnitude barred spiral starburst galaxy conspicuously warped by interaction with a neighbour.
MACS0647-JD, one of the candidates for the farthest known galaxy in the universe (z = 10.7), is also in Camelopardalis.
Meteor showers
The annual May meteor shower, the Camelopardalids, from comet 209P/LINEAR, has its radiant in Camelopardalis.
History
Camelopardalis is not one of Ptolemy's 48 constellations in the Almagest. It was created by Petrus Plancius in 1613. It first appeared in a globe designed by him and produced by Pieter van den Keere. One year later, Jakob Bartsch featured it in his atlas. Johannes Hevelius depicted this constellation in his works which were so influential that it was referred to as Camelopardali Hevelii or abbreviated as Camelopard. Hevel.
Part of the constellation was hived off to form the constellation Sciurus Volans, the Flying Squirrel, by William Croswell in 1810. However this was not taken up by later cartographers.
Equivalents
In Chinese astronomy, the stars of Camelopardalis are located within a group of circumpolar stars called the Purple Forbidden Enclosure (紫微垣 Zǐ Wēi Yuán).
See also
Camelopardalis (Chinese astronomy)
References
Citations
References
External links
The Deep Photographic Guide to the Constellations: Camelopardalis
Star Tales – Camelopardalis
NASA – Voyager Interstellar Mission Characteristics
Northern constellations
Constellations listed by Petrus Plancius | Camelopardalis | [
"Astronomy"
] | 1,532 | [
"Camelopardalis",
"Constellations listed by Petrus Plancius",
"Constellations",
"Northern constellations"
] |
6,366 | https://en.wikipedia.org/wiki/Canis%20Major | Canis Major is a constellation in the southern celestial hemisphere. In the second century, it was included in Ptolemy's 48 constellations, and is counted among the 88 modern constellations. Its name is Latin for "greater dog" in contrast to Canis Minor, the "lesser dog"; both figures are commonly represented as following the constellation of Orion the hunter through the sky. The Milky Way passes through Canis Major and several open clusters lie within its borders, most notably M41.
Canis Major contains Sirius, the brightest star in the night sky, known as the "dog star". It is bright because of its proximity to the Solar System and its intrinsic brightness. In contrast, the other bright stars of the constellation are stars of great distance and high luminosity. At magnitude 1.5, Epsilon Canis Majoris (Adhara) is the second-brightest star of the constellation and the brightest source of extreme ultraviolet radiation in the night sky. Next in brightness are the yellow-white supergiant Delta (Wezen) at 1.8, the blue-white giant Beta (Mirzam) at 2.0, blue-white supergiants Eta (Aludra) at 2.4 and Omicron2 at 3.0, and white spectroscopic binary Zeta (Furud), also at 3.0. The red hypergiant VY CMa is one of the largest stars known, while the neutron star RX J0720.4-3125 has a radius of a mere 5 km.
History and mythology
In western astronomy
In ancient Mesopotamia, Sirius, named KAK.SI.SA2 by the Babylonians, was seen as an arrow aiming towards Orion, while the southern stars of Canis Major and a part of Puppis were viewed as a bow, named BAN in the Three Stars Each tablets, dating to around 1100 BC. In the later compendium of Babylonian astronomy and astrology titled MUL.APIN, the arrow, Sirius, was also linked with the warrior Ninurta, and the bow with Ishtar, daughter of Enlil. Ninurta was linked to the later deity Marduk, who was said to have slain the ocean goddess Tiamat with a great bow, and worshipped as the principal deity in Babylon. The Ancient Greeks replaced the bow and arrow depiction with that of a dog.
In Greek Mythology, Canis Major represented the dog Laelaps, a gift from Zeus to Europa; or sometimes the hound of Procris, Diana's nymph; or the one given by Aurora to Cephalus, so famed for its speed that Zeus elevated it to the sky. It was also considered to represent one of Orion's hunting dogs, pursuing Lepus the Hare or helping Orion fight Taurus the Bull; and is referred to in this way by Aratos, Homer and Hesiod. The ancient Greeks refer only to one dog, but by Roman times, Canis Minor appears as Orion's second dog. Alternative names include Canis Sequens and Canis Alter. Canis Syrius was the name used in the 1521 Alfonsine tables.
The Roman myth refers to Canis Major as Custos Europae, the dog guarding Europa but failing to prevent her abduction by Jupiter in the form of a bull, and as Janitor Lethaeus, "the watchdog". In medieval Arab astronomy, the constellation became al-Kalb al-Akbar, "the Greater Dog", transcribed as Alcheleb Alachbar by 17th century writer Edmund Chilmead. Islamic scholar Abū Rayḥān al-Bīrūnī referred to Orion as Kalb al-Jabbār, "the Dog of the Giant". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called Merzem, includes the stars of Canis Major and Canis Minor and is the herald of two weeks of hot weather.
In non-western astronomy
In Chinese astronomy, the modern constellation of Canis Major is located in the Vermilion Bird (朱雀, Zhū Què), where its stars were classified in several separate asterisms. The Military Market (軍市, Jūnshì) was a circular pattern of stars containing Nu3, Beta, Xi1 and Xi2, and some stars from Lepus. The Wild Cockerel (野雞, Yějī) was at the centre of the Military Market, although it is uncertain which stars depicted what. Schlegel reported that the stars Omicron and Pi Canis Majoris might have been them, while Beta or Nu2 have also been proposed. Sirius was 天狼 (Tiānláng), the Celestial Wolf, denoting invasion and plunder. Southeast of the Wolf was the asterism 弧矢 (Húshǐ), the celestial Bow and Arrow, which was interpreted as containing Delta, Epsilon, Eta and Kappa Canis Majoris and Delta Velorum. Alternatively, the arrow was depicted by Omicron2 and Eta and aimed at Sirius (the Wolf), while the bow comprised Kappa, Epsilon, Sigma, Delta and 164 Canis Majoris, and Pi and Omicron Puppis.
Both the Māori people and the people of the Tuamotus recognized the figure of Canis Major as a distinct entity, though it was sometimes absorbed into other constellations. A Māori constellation known as "The Assembly of Rehua" or "The Assembly of Sirius" included both Canis Minor and Canis Major, along with some surrounding stars. Related was the Mirror of Rehua, formed from an undefined group of stars in Canis Major. They called Sirius Rehua and Takurua, corresponding to two of the names for the constellation, though Rehua was a name applied to other stars in various Māori groups and other Polynesian cosmologies. The Tuamotu people called Canis Major "the abiding assemblage of Takurua".
The Tharumba people of the Shoalhaven River saw three stars of Canis Major as a bat and his two wives, Mrs Brown Snake and Mrs Black Snake; bored with following their husband around, the women try to bury him while he is hunting a wombat down its hole. He spears them, and all three are placed in the sky as the constellation. To the Boorong people of Victoria, Sigma Canis Majoris was Unurgunite (which has become the official name of this star), and its flanking stars Delta and Epsilon were his two wives. The moon (Mityan, the "native cat") sought to lure the further wife (Epsilon) away, but Unurgunite assaulted him and he has been wandering the sky ever since.
Characteristics
Canis Major is a constellation in the Southern Hemisphere's summer (or Northern Hemisphere's winter) sky, bordered by Monoceros (which lies between it and Canis Minor) to the north, Puppis to the east and southeast, Columba to the southwest, and Lepus to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CMa". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a quadrilateral; in the equatorial coordinate system, the declination coordinates of these borders lie between −11.03° and −33.25°. Covering 380 square degrees or 0.921% of the sky, it ranks 43rd of the 88 currently recognized constellations in size.
Features
Stars
Canis Major is a prominent constellation because of its many bright stars. These include Sirius (Alpha Canis Majoris), the brightest star in the night sky, as well as three other stars above magnitude 2.0. Furthermore, two other stars are thought to have previously outshone all others in the night sky—Adhara (Epsilon Canis Majoris) shone at −3.99 around 4.7 million years ago, and Mirzam (Beta Canis Majoris) peaked at −3.65 around 4.42 million years ago. Another, NR Canis Majoris, will be brightest at magnitude −0.88 in about 2.87 million years' time.
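Such past and future apparent magnitudes follow from the inverse-square law, usually written as the distance modulus m = M + 5 log10(d / 10 pc). The following minimal sketch, assuming 1 pc ≈ 3.26 light-years and neglecting interstellar extinction and the star's own evolution (the source states only the magnitudes and present distance), recovers Adhara's absolute magnitude from the figures quoted elsewhere in this article and solves for the distance at which it would have appeared at magnitude −3.99:

```python
import math

def abs_mag(m, d_pc):
    """Absolute magnitude from apparent magnitude and distance in parsecs."""
    return m - 5 * math.log10(d_pc / 10)

def distance_for(m, M):
    """Distance (parsecs) at which absolute magnitude M appears as apparent magnitude m."""
    return 10 ** ((m - M + 5) / 5)

M_adhara = abs_mag(1.5, 404 / 3.26)      # present values: m = 1.5 at 404 ly -> M ~ -3.97
d_past = distance_for(-3.99, M_adhara)   # ~ 10 pc
print(f"M = {M_adhara:.2f}, past distance ~ {d_past * 3.26:.0f} ly")  # ~32 ly
```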
The German cartographer Johann Bayer used the Greek letters Alpha through Omicron to label the most prominent stars in the constellation, including three adjacent stars as Nu and two further pairs as Xi and Omicron, while subsequent observers designated further stars in the southern parts of the constellation that were hard to discern from Central Europe. Bayer's countryman Johann Elert Bode later added Sigma, Tau and Omega; the French astronomer Nicolas Louis de Lacaille added lettered stars a to k (though none are in use today). John Flamsteed numbered 31 stars, with 3 Canis Majoris being placed by Lacaille into Columba as Delta Columbae (Flamsteed had not recognised Columba as a distinct constellation). He also labelled two stars—his 10 and 13 Canis Majoris—as Kappa1 and Kappa2 respectively, but subsequent cartographers such as Francis Baily and John Bevis dropped the fainter former star, leaving Kappa2 as the sole Kappa. Flamsteed's listing of Nu1, Nu2, Nu3, Xi1, Xi2, Omicron1 and Omicron2 have all remained in use.
Sirius is the brightest star in the night sky at apparent magnitude −1.46 and one of the closest stars to Earth at a distance of 8.6 light-years. Its name comes from the Greek word for "scorching" or "searing". Sirius is also a binary star; its companion Sirius B is a white dwarf of magnitude 8.4, some 10,000 times fainter than Sirius A to observers on Earth. The two orbit each other every 50 years. Their closest approach last occurred in 1993 and they will be at their greatest separation between 2020 and 2025. Sirius was the basis for the ancient Egyptian calendar. The star marked the Great Dog's mouth on Bayer's star atlas.
Flanking Sirius are Beta and Gamma Canis Majoris. Also called Mirzam or Murzim, Beta is a blue-white Beta Cephei variable star of magnitude 2.0, which varies by a few hundredths of a magnitude over a period of six hours. Mirzam is 500 light-years from Earth, and its traditional name means "the announcer", referring to its position as the "announcer" of Sirius, as it rises a few minutes before Sirius does. Gamma, also known as Muliphein, is a fainter star of magnitude 4.12, in reality a blue-white bright giant of spectral type B8IIe located 441 light-years from earth. Iota Canis Majoris, lying between Sirius and Gamma, is another star that has been classified as a Beta Cephei variable, varying from magnitude 4.36 to 4.40 over a period of 1.92 hours. It is a remote blue-white supergiant star of spectral type B3Ib, around 46,000 times as luminous as the sun and, at 2500 light-years distant, 300 times further away than Sirius.
Epsilon, Omicron2, Delta, and Eta Canis Majoris were called Al Adzari "the virgins" in medieval Arabic tradition. Marking the dog's right thigh on Bayer's atlas is Epsilon Canis Majoris, also known as Adhara. At magnitude 1.5, it is the second-brightest star in Canis Major and the 23rd-brightest star in the sky. It is a blue-white supergiant of spectral type B2Iab, around 404 light-years from Earth. This star is one of the brightest known extreme ultraviolet sources in the sky. It is a binary star; the secondary is of magnitude 7.4. Its traditional name means "the virgins", having been transferred from the group of stars to Epsilon alone. Nearby is Delta Canis Majoris, also called Wezen. It is a yellow-white supergiant of spectral type F8Iab and magnitude 1.84, around 1605 light-years from Earth. With a traditional name meaning "the weight", Wezen is 17 times as massive and 50,000 times as luminous as the Sun. If located in the centre of the Solar System, it would extend out to Earth as its diameter is 200 times that of the Sun. Only around 10 million years old, Wezen has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has a core of iron, it will collapse and explode as a supernova. Nestled between Adhara and Wezen lies Sigma Canis Majoris, known as Unurgunite to the Boorong and Wotjobaluk people, a red supergiant of spectral type K7Ib that varies irregularly between magnitudes 3.43 and 3.51.
Also called Aludra, Eta Canis Majoris is a blue-white supergiant of spectral type B5Ia with a luminosity 176,000 times and diameter around 80 times that of the Sun. Classified as an Alpha Cygni type variable star, Aludra varies in brightness from magnitude 2.38 to 2.48 over a period of 4.7 days. It is located 1120 light-years away. To the west of Adhara lies 3.0-magnitude Zeta Canis Majoris or Furud, around 362 light-years distant from Earth. It is a spectroscopic binary, whose components orbit each other every 1.85 years, the combined spectrum indicating a main star of spectral type B2.5V.
Between these stars and Sirius lie Omicron1, Omicron2, and Pi Canis Majoris. Omicron2 is a massive supergiant star about 21 times as massive as the Sun. Only 7 million years old, it has exhausted the supply of hydrogen at its core and is now processing helium. It is an Alpha Cygni variable that undergoes periodic non-radial pulsations, which cause its brightness to cycle from magnitude 2.93 to 3.08 over a 24.44-day interval. Omicron1 is an orange K-type supergiant of spectral type K2.5Iab that is an irregular variable star, varying between apparent magnitudes 3.78 and 3.99. Around 18 times as massive as the Sun, it shines with 65,000 times its luminosity.
North of Sirius lie Theta and Mu Canis Majoris, Theta being the most northerly star with a Bayer designation in the constellation. Around 8 billion years old, it is an orange giant of spectral type K4III that is around as massive as the Sun but has expanded to 30 times the Sun's diameter. Mu is a multiple star system located around 1244 light-years distant, its components discernible in a small telescope as a 5.3-magnitude yellow-hued and 7.1-magnitude bluish star. The brighter star is a giant of spectral type K2III, while the companion is a main sequence star of spectral type B9.5V. Nu1 Canis Majoris is a yellow-hued giant star of magnitude 5.7, 278 light-years away; it is at the threshold of naked-eye visibility. It has a companion of magnitude 8.1.
At the southern limits of the constellation lie Kappa and Lambda Canis Majoris. Although of similar spectra and nearby each other as viewed from Earth, they are unrelated. Kappa is a Gamma Cassiopeiae variable of spectral type B2Vne, which brightened by 50% between 1963 and 1978, from magnitude 3.96 or so to 3.52. It is around 659 light-years distant. Lambda is a blue-white B-type main sequence dwarf with an apparent magnitude of 4.48 located around 423 light-years from Earth. It is 3.7 times as wide as and 5.5 times as massive as the Sun, and shines with 940 times its luminosity.
Canis Major is also home to many variable stars. EZ Canis Majoris is a Wolf–Rayet star of spectral type WN4 that varies between magnitudes 6.71 and 6.95 over a period of 3.766 days; the cause of its variability is unknown but thought to be related to its stellar wind and rotation. VY Canis Majoris is a remote red hypergiant located approximately 3,800 light-years away from Earth. It is one of the largest stars known (sometimes described as the largest known) and is also one of the most luminous, with a radius varying from 1,420 to 2,200 times the Sun's radius, and a luminosity around 300,000 times greater than the Sun. Its current mass is about 17 ± 8 solar masses, having shed material from an initial mass of 25–32 solar masses. VY CMa is also surrounded by a red reflection nebula made by the material expelled by the strong stellar winds of its central star. W Canis Majoris is a type of red giant known as a carbon star; a semiregular variable, it ranges between magnitudes 6.27 and 7.09 over a period of 160 days. A cool star, it has a surface temperature of around 2,900 K and a radius 234 times that of the Sun, its distance estimated at 1,444–1,450 light-years from Earth. At the other extreme in size is RX J0720.4-3125, a neutron star with a radius of around 5 km. Exceedingly faint, it has an apparent magnitude of 26.6. Its spectrum and temperature appear to be mysteriously changing over several years. The nature of the changes is unclear, but it is possible they were caused by an event such as the star's absorption of an accretion disc.
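VY Canis Majoris's combination of enormous radius, high luminosity and red colour is tied together by the Stefan–Boltzmann law, L = 4πR²σT⁴: at fixed luminosity, a larger star must be cooler. A small sketch, assuming blackbody emission and a solar effective temperature of 5,772 K for scaling (the source gives only the radius and luminosity), shows that the quoted radius range implies a surface temperature of roughly 2,900–3,600 K, as expected for a red hypergiant:

```python
# Stefan-Boltzmann consistency check in solar units:
# L/L_sun = (R/R_sun)**2 * (T/T_sun)**4  =>  T = T_sun * (L / R**2)**0.25
T_SUN = 5772.0

def t_eff(lum_solar, radius_solar):
    return T_SUN * (lum_solar / radius_solar**2) ** 0.25

# VY CMa: ~300,000 L_sun and a radius of ~1,420-2,200 R_sun (figures above)
for r in (1420, 2200):
    print(f"R = {r} R_sun -> T_eff ~ {t_eff(300_000, r):.0f} K")  # ~3600 K and ~2900 K
```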
Tau Canis Majoris is a Beta Lyrae-type eclipsing multiple star system that varies from magnitude 4.32 to 4.37 over 1.28 days. Its four main component stars are hot O-type stars, with a combined mass 80 times that of the Sun and shining with 500,000 times its luminosity, but little is known of their individual properties. A fifth component, a magnitude 10 star, lies at a distance of . The system is only 5 million years old. UW Canis Majoris is another Beta Lyrae-type star 3000 light-years from Earth; it is an eclipsing binary that ranges in magnitude from a minimum of 5.3 to a maximum of 4.8. It has a period of 4.4 days; its components are two massive hot blue stars, one a blue supergiant of spectral type O7.5–8 Iab, while its companion is a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib. The stars are 200,000 and 63,000 times as luminous as the Sun. However, the fainter star is the more massive, at 19 solar masses to the primary's 16. R Canis Majoris is another eclipsing binary that varies from magnitude 5.7 to 6.34 over 1.13 days, with a third star orbiting these two every 93 years. The shortness of the orbital period and the low mass ratio between the two main components make this an unusual Algol-type system.
Seven star systems have been found to have planets. Nu2 Canis Majoris is an ageing orange giant of spectral type K1III of apparent magnitude 3.91 located around 64 light-years distant. Around 1.5 times as massive and 11 times as luminous as the Sun, it is orbited over a period of 763 days by a planet 2.6 times as massive as Jupiter. HD 47536 is likewise an ageing orange giant found to have a planetary system—echoing the fate of the Solar System in a few billion years as the Sun ages and becomes a giant. Conversely, HD 45364 is a star 107 light-years distant that is a little smaller and cooler than the Sun, of spectral type G8V, which has two planets discovered in 2008. With orbital periods of 228 and 342 days, the planets have a 3:2 orbital resonance, which helps stabilise the system. HD 47186 is another sunlike star with two planets; the inner—HD 47186 b—takes four days to complete an orbit and has been classified as a Hot Neptune, while the outer—HD 47186 c—has an eccentric 3.7-year period orbit and has a similar mass to Saturn. HD 43197 is a sunlike star around 183 light-years distant that has two planets: the inner is a hot Jupiter-sized planet with an eccentric orbit, while the outer, HD 43197 c, is another massive Jovian planet with a slightly oblong orbit outside of the star's habitable zone.
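The 3:2 resonance of HD 45364's planets can be read directly off the quoted periods; a quick sketch in Python:

```python
from fractions import Fraction

# Orbital periods of HD 45364's two planets, as quoted above (days).
p_inner, p_outer = 228.0, 342.0

ratio = p_outer / p_inner
resonance = Fraction(ratio).limit_denominator(10)  # nearest small-integer ratio

print(f"period ratio {ratio:.3f} -> {resonance.numerator}:{resonance.denominator} resonance")
# period ratio 1.500 -> 3:2 resonance
```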
Z Canis Majoris is a star system a mere 300,000 years old composed of two pre-main-sequence stars—an FU Orionis star and a Herbig Ae/Be star, which has brightened episodically by two magnitudes to magnitude 8 in 1987, 2000, 2004 and 2008. The more massive Herbig Ae/Be star is enveloped in an irregular roughly spherical cocoon of dust that has an inner diameter of and outer diameter of . The cocoon has a hole in it through which light shines that covers an angle of 5 to 10 degrees of its circumference. Both stars are surrounded by a large envelope of in-falling material left over from the original cloud that formed the system. Both stars are emitting jets of material, that of the Herbig Ae/Be star being much larger—11.7 light-years long. Meanwhile, FS Canis Majoris is another star with infrared emissions indicating a compact shell of dust, but it appears to be a main-sequence star that has absorbed material from a companion. These stars are thought to be significant contributors to interstellar dust.
Deep-sky objects
The band of the Milky Way goes through Canis Major, with only patchy obscurement by interstellar dust clouds. It is bright in the northeastern corner of the constellation, as well as in a triangular area between Adhara, Wezen and Aludra, with many stars visible in binoculars. Canis Major boasts several open clusters. The only Messier object is M41 (NGC 2287), an open cluster with a combined visual magnitude of 4.5, around 2300 light-years from Earth. Located 4 degrees south of Sirius, it contains contrasting blue, yellow and orange stars and covers an area the apparent size of the full moon—in reality around 25 light-years in diameter. Its most luminous stars have already evolved into giants. The brightest is a 6.3-magnitude star of spectral type K3. Located in the field is 12 Canis Majoris, though this star is only 670 light-years distant. NGC 2360, known as Caroline's Cluster after its discoverer Caroline Herschel, is an open cluster located 3.5 degrees west of Muliphein and has a combined apparent magnitude of 7.2. Around 15 light-years in diameter, it is located 3700 light-years away from Earth, and has been dated to around 2.2 billion years old. NGC 2362 is a small, compact open cluster, 5200 light-years from Earth. It contains about 60 stars, of which Tau Canis Majoris is the brightest member. Located around 3 degrees northeast of Wezen, it covers an area around 12 light-years in diameter, though the stars appear huddled around Tau when seen through binoculars. It is a very young open cluster as its member stars are only a few million years old. Lying 2 degrees southwest of NGC 2362 is NGC 2354, a fainter open cluster of magnitude 6.5, with around 15 member stars visible with binoculars. Located around 30' northeast of NGC 2360, NGC 2359 (Thor's Helmet or the Duck Nebula) is a relatively bright emission nebula in Canis Major, with an approximate magnitude of 10, which is 10,000 light-years from Earth. The nebula is shaped by HD 56925, an unstable Wolf–Rayet star embedded within it.
In 2003, an overdensity of stars in the region was announced to be the Canis Major Dwarf, the closest satellite galaxy to Earth. However, there remains debate over whether it represents a disrupted dwarf galaxy or in fact a variation in the thin and thick disk and spiral arm populations of the Milky Way. Investigation of the area yielded only ten RR Lyrae variables—consistent with the Milky Way's halo and thick disk populations rather than a separate dwarf spheroidal galaxy. On the other hand, a globular cluster in Puppis, NGC 2298—which appears to be part of the Canis Major dwarf system—is extremely metal-poor, suggesting it did not arise from the Milky Way's thick disk, and instead is of extragalactic origin.
NGC 2207 and IC 2163 are a pair of face-on interacting spiral galaxies located 125 million light-years from Earth. About 40 million years ago, the two galaxies had a close encounter and are now moving farther apart; nevertheless, the smaller IC 2163 will eventually be incorporated into NGC 2207. As the interaction continues, gas and dust will be perturbed, sparking extensive star formation in both galaxies. Supernovae have been observed in NGC 2207 in 1975 (the type Ia SN 1975a), 1999 (the type Ib SN 1999ec), 2003 (the type Ib SN 2003H), and 2013 (the type II SN 2013ai). Located 16 million light-years distant, ESO 489-056 is an irregular dwarf and low-surface-brightness galaxy that has one of the lowest metallicities known.
References
Citations
Bibliography
External links
The Deep Photographic Guide to the Constellations: Canis Major
The clickable Canis Major
Warburg Institute Iconographic Database (medieval and early modern images of Canis Major)
Constellations listed by Ptolemy
Southern constellations | Canis Major | [
"Astronomy"
] | 5,433 | [
"Constellations listed by Ptolemy",
"Canis Major",
"Southern constellations",
"Constellations"
] |
6,367 | https://en.wikipedia.org/wiki/Canis%20Minor | Canis Minor is a small constellation in the northern celestial hemisphere. In the second century, it was included as an asterism, or pattern, of two stars in Ptolemy's 48 constellations, and it is counted among the 88 modern constellations. Its name is Latin for "lesser dog", in contrast to Canis Major, the "greater dog"; both figures are commonly represented as following the constellation of Orion the hunter.
Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9. The constellation's dimmer stars were noted by Johann Bayer, who named eight stars including Alpha and Beta, and John Flamsteed, who numbered fourteen. Procyon is the eighth-brightest star in the night sky, as well as one of the closest. A yellow-white main-sequence star, it has a white dwarf companion. Gomeisa is a blue-white main-sequence star. Luyten's Star is a ninth-magnitude red dwarf and, after Procyon, the Solar System's next-closest stellar neighbour in the constellation. Additionally, Procyon and Luyten's Star are only 1.12 light-years away from each other, and Procyon would be the brightest star in Luyten's Star's sky. The fourth-magnitude HD 66141, which has evolved into an orange giant towards the end of its life cycle, was discovered to have a planet in 2012. There are two faint deep-sky objects within the constellation's borders. The 11 Canis-Minorids are a meteor shower that can be seen in early December.
History and mythology
Though strongly associated with the Classical Greek uranographic tradition, Canis Minor originates from ancient Mesopotamia. Procyon and Gomeisa were called MASH.TAB.BA or "twins" in the Three Stars Each tablets, dating to around 1100 BC. In the later MUL.APIN, this name was also applied to the pairs of Pi3 and Pi4 Orionis and Zeta and Xi Orionis. The meaning of MASH.TAB.BA evolved as well, becoming the twin deities Lulal and Latarak, who are on the opposite side of the sky from Papsukkal, the True Shepherd of Heaven in Babylonian mythology. Canis Minor was also given the name DAR.LUGAL, its position defined as "the star which stands behind it [Orion]", in the MUL.APIN; the constellation represents a rooster. This name may have also referred to the constellation Lepus. DAR.LUGAL was also denoted DAR.MUŠEN and DAR.LUGAL.MUŠEN in Babylonia. Canis Minor was then called tarlugallu in Akkadian astronomy.
Canis Minor was one of the original 48 constellations formulated by Ptolemy in his second-century Almagest, in which it was defined as a specific pattern (asterism) of stars; Ptolemy identified only two stars and hence no depiction was possible. The Ancient Greeks called the constellation προκυων/Procyon, "coming before the dog", translated into Latin as Antecanis, Praecanis, or variations thereof, by Cicero and others. Roman writers also appended the descriptors parvus, minor or minusculus ("small" or "lesser", for its faintness), septentrionalis ("northerly", for its position in relation to Canis Major), primus (rising "first") or sinister (rising to the "left") to its name Canis.
In Greek mythology, Canis Minor was sometimes connected with the Teumessian Fox, a beast turned into stone with its hunter, Laelaps, by Zeus, who placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). Eratosthenes accompanied the Little Dog with Orion, while Hyginus linked the constellation with Maera, a dog owned by Icarius of Athens. On discovering the latter's death, the dog and Icarius' daughter Erigone took their lives and all three were placed in the sky—Erigone as Virgo and Icarius as Boötes. As a reward for his faithfulness, the dog was placed along the "banks" of the Milky Way, which the ancients believed to be a heavenly river, where he would never suffer from thirst.
The medieval Arabic astronomers maintained the depiction of Canis Minor (al-Kalb al-Asghar in Arabic) as a dog; in his Book of the Fixed Stars, Abd al-Rahman al-Sufi included a diagram of the constellation with a canine figure superimposed. There was one slight difference between the Ptolemaic vision of Canis Minor and the Arabic; al-Sufi claims Mirzam, now assigned to Orion, as part of both Canis Minor—the collar of the dog—and its modern home. The Arabic names for both Procyon and Gomeisa alluded to their proximity and resemblance to Sirius, though they were not direct translations of the Greek; Procyon was called ash-Shi'ra ash-Shamiya, the "Syrian Sirius" and Gomeisa was called ash-Shira al-Ghamisa, the Sirius with bleary eyes. Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called Merzem, includes the stars of Canis Minor and Canis Major and is the herald of two weeks of hot weather.
The ancient Egyptians thought of this constellation as Anubis, the jackal god.
Alternative names have been proposed: Johann Bayer in the early 17th century termed the constellation Fovea "The Pit", and Morus "Sycamine Tree". Seventeenth-century German poet and author Philippus Caesius linked it to the dog of Tobias from the Apocrypha. Richard A. Proctor gave the constellation the name Felis "the Cat" in 1870 (contrasting with Canis Major, which he had abbreviated to Canis "the Dog"), explaining that he sought to shorten the constellation names to make them more manageable on celestial charts. Occasionally, Canis Minor is confused with Canis Major and given the name Canis Orionis ("Orion's Dog").
In non-Western astronomy
In Chinese astronomy, the stars corresponding to Canis Minor lie in the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què). Procyon, Gomeisa and Eta Canis Minoris form an asterism known as Nánhé, the Southern River. With its counterpart, the Northern River Beihe (Castor and Pollux), Nánhé was also associated with a gate or sentry. Along with Zeta and 8 Cancri, 6 Canis Minoris and 11 Canis Minoris formed the asterism Shuiwei, which literally means "water level". Combined with additional stars in Gemini, Shuiwei represented an official who managed floodwaters or a marker of the water level. Neighboring Korea recognized four stars in Canis Minor as part of a different constellation, "the position of the water". This constellation was located in the Red Bird, the southern portion of the sky.
Polynesian peoples often did not recognize Canis Minor as a constellation, but they saw Procyon as significant and often named it; in the Tuamotu Archipelago it was known as Hiro, meaning "twist as a thread of coconut fiber", and Kopu-nui-o-Hiro ("great paunch of Hiro"), which was either a name for the modern figure of Canis Minor or an alternative name for Procyon. Other names included Vena (after a goddess), on Mangaia and Puanga-hori (false Puanga, the name for Rigel), in New Zealand. In the Society Islands, Procyon was called Ana-tahua-vahine-o-toa-te-manava, literally "Aster the priestess of brave heart", figuratively the "pillar for elocution". The Wardaman people of the Northern Territory in Australia gave Procyon and Gomeisa the names Magum and Gurumana, describing them as humans who were transformed into gum trees in the dreamtime. Although their skin had turned to bark, they were able to speak with a human voice by rustling their leaves.
The Aztec calendar was related to their cosmology. The stars of Canis Minor were incorporated along with some stars of Orion and Gemini into an asterism associated with the day called "Water".
Characteristics
Lying directly south of Gemini's bright stars Castor and Pollux, Canis Minor is a small constellation bordered by Monoceros to the south, Gemini to the north, Cancer to the northeast, and Hydra to the east. It does not border Canis Major; Monoceros is in between the two. Covering 183 square degrees, Canis Minor ranks seventy-first of the 88 constellations in size. It appears prominently in the southern sky during the Northern Hemisphere's winter. The constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Most visible in the evening sky from January to March, Canis Minor is most prominent at 10 p.m. during mid-February. It is then seen earlier in the evening until July, when it is only visible after sunset before setting itself, and rising in the morning sky before dawn. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "CMi".
Features
Stars
Canis Minor contains only two stars brighter than fourth magnitude. At magnitude 0.34, Procyon, or Alpha Canis Minoris, is the eighth-brightest star in the night sky, as well as one of the closest. Its name means "before the dog" or "preceding the dog" in Greek, as it rises an hour before the "Dog Star", Sirius, of Canis Major. It is a binary star system, consisting of a yellow-white main-sequence star of spectral type F5 IV-V, named Procyon A, and a faint white dwarf companion of spectral type DA, named Procyon B. Procyon B, which orbits the more massive star every 41 years, is of magnitude 10.7. Procyon A is 1.4 times the Sun's mass, while its smaller companion is 0.6 times as massive as the Sun. The system is from Earth, the shortest distance to a northern-hemisphere star of the first magnitude. Gomeisa, or Beta Canis Minoris, with a magnitude of 2.89, is the second-brightest star in Canis Minor. Lying from the Solar System, it is a blue-white main-sequence star of spectral class B8 Ve. Although it appears fainter to Earth observers, Gomeisa is intrinsically much brighter than Procyon, being 250 times as luminous and three times as massive as the Sun. Although its variations are slight, Gomeisa is classified as a shell star (Gamma Cassiopeiae variable), with a maximum magnitude of 2.84 and a minimum magnitude of 2.92. It is surrounded by a disk of gas which it heats and causes to emit radiation.
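That contrast between apparent and intrinsic brightness is captured by the distance modulus, M = m - 5*log10(d / 10 pc). A short sketch; the distances used (about 3.5 pc for Procyon and about 50 pc for Gomeisa) are commonly quoted values supplied here as assumptions, since they are elided in the text above:

```python
import math

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Absolute magnitude via the distance modulus M = m - 5*log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Apparent magnitudes are from the text; the distances are commonly quoted
# values used as assumptions here.
print(f"Procyon: M ~ {absolute_magnitude(0.34, 3.5):+.2f}")   # ~ +2.6
print(f"Gomeisa: M ~ {absolute_magnitude(2.89, 50.0):+.2f}")  # ~ -0.6, intrinsically brighter
```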
Johann Bayer used the Greek letters Alpha to Eta to label the most prominent eight stars in the constellation, designating two stars as Delta (named Delta1 and Delta2). John Flamsteed numbered fourteen stars, discerning a third star he named Delta3; his star 12 Canis Minoris was not found subsequently. In Bayer's 1603 work Uranometria, Procyon is located on the dog's belly, and Gomeisa on its neck. Gamma, Epsilon and Eta Canis Minoris lie nearby, marking the dog's neck, crown and chest, respectively. Although it has an apparent magnitude of 4.34, Gamma Canis Minoris is an orange K-type giant of spectral class K3-III C, which lies away. Its colour is obvious when seen through binoculars. It is a multiple system, consisting of the spectroscopic binary Gamma A and three optical companions, Gamma B, magnitude 13; Gamma C, magnitude 12; and Gamma D, magnitude 10. The two components of Gamma A orbit each other every 389.2 days, with an eccentric orbit that takes their separation between 1.4 and 2.3 astronomical units (AU). Epsilon Canis Minoris is a yellow bright giant of spectral class G6.5IIb of magnitude 4.99. It lies from Earth, with 13 times the diameter and 750 times the luminosity of the Sun. Eta Canis Minoris is a giant of spectral class F0III of magnitude 5.24, which has a yellowish hue when viewed through binoculars as well as a faint companion of magnitude 11.1. Located 4 arcseconds from the primary, the companion star is actually around 440 AU from the main star and takes around 5,000 years to orbit it.
Near Procyon, three stars share the name Delta Canis Minoris. Delta1 is a yellow-white F-type giant of magnitude 5.25 located around from Earth. About 360 times as luminous and 3.75 times as massive as the Sun, it is expanding and cooling as it ages, having spent much of its life as a main-sequence star of spectral type B6V. Also known as 8 Canis Minoris, Delta2 is an F-type main-sequence star of spectral type F2V and magnitude 5.59 which is distant. The last of the trio, Delta3 (also known as 9 Canis Minoris), is a white main-sequence star of spectral type A0Vnn and magnitude 5.83 which is distant. These stars mark the paws of the Lesser Dog's left hind leg, while magnitude 5.13 Zeta marks the right. A blue-white bright giant of spectral type B8II, Zeta lies around away from the Solar System.
Lying 222 ± 7 light-years away with an apparent magnitude of 4.39, HD 66141 is 6.8 billion years old and has evolved into an orange giant of spectral type K2III with a diameter around 22 times that of the Sun and a mass of 1.1 solar masses. It is 174 times as luminous as the Sun, with an absolute magnitude of −0.15. HD 66141 was mistakenly named 13 Puppis: its celestial coordinates were recorded incorrectly when catalogued, and it was hence thought to lie in the constellation of Puppis; Bode gave it the name Lambda Canis Minoris, which is now obsolete. The orange giant is orbited by a planet, HD 66141b, which was detected in 2012 by measuring the star's radial velocity. The planet has a mass around 6 times that of Jupiter and a period of 480 days.
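The radial-velocity detection can be put into rough numbers. For a circular, edge-on orbit with the planet much lighter than its star, the star's reflex motion has semi-amplitude K = (2*pi*G/P)^(1/3) * m_p / M_star^(2/3). A sketch using the figures quoted above (6 Jupiter masses, 480 days, a 1.1-solar-mass host), with the circular, edge-on orbit taken as a simplifying assumption:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
DAY = 86_400.0     # s

def rv_semi_amplitude(m_planet_mjup, period_days, m_star_msun):
    """Approximate stellar reflex semi-amplitude (m/s), assuming a circular,
    edge-on orbit and a planet much less massive than its star."""
    p = period_days * DAY
    m_p = m_planet_mjup * M_JUP
    m_s = m_star_msun * M_SUN
    return (2 * math.pi * G / p) ** (1 / 3) * m_p / m_s ** (2 / 3)

# HD 66141 b: planet mass and period from the text; 1.1 M_sun for the host.
print(f"K ~ {rv_semi_amplitude(6, 480, 1.1):.0f} m/s")  # ~150 m/s
```

A wobble of this size is large by radial-velocity standards, which is consistent with the planet being detectable around a bright fourth-magnitude star.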
A red giant of spectral type M4III, BC Canis Minoris lies around distant from the Solar System. It is a semiregular variable star that varies between a maximum magnitude of 6.14 and minimum magnitude of 6.42. Periods of 27.7, 143.3 and 208.3 days have been recorded in its pulsations. AZ, AD and BI Canis Minoris are Delta Scuti variables—short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. AZ is of spectral type A5IV, and ranges between magnitudes 6.44 and 6.51 over a period of 2.3 hours. AD has a spectral type of F2III, and has a maximum magnitude of 9.21 and minimum of 9.51, with a period of approximately 2.95 hours. BI is of spectral type F2 with an apparent magnitude varying around 9.19 and a period of approximately 2.91 hours.
At least three red giants are Mira variables in Canis Minor. S Canis Minoris, of spectral type M7e, is the brightest, ranging from magnitude 6.6 to 13.2 over a period of 332.94 days. V Canis Minoris ranges from magnitude 7.4 to 15.1 over a period of 366.1 days. Similar in magnitude is R Canis Minoris, which has a maximum of 7.3, but a significantly brighter minimum of 11.6. An S-type star, it has a period of 337.8 days.
YZ Canis Minoris is a red dwarf of spectral type M4.5V and magnitude 11.2, roughly three times the size of Jupiter and from Earth. It is a flare star, emitting unpredictable outbursts of energy for mere minutes, which might be much more powerful analogues of solar flares. Luyten's Star (GJ 273) is a red dwarf star of spectral type M3.5V and close neighbour of the Solar System. Its visual magnitude of 9.9 renders it too faint to be seen with the naked eye, even though it is only away. Fainter still is PSS 544-7, an eighteenth-magnitude red dwarf around 20 per cent the mass of the Sun, located from Earth. First noticed in 1991, it is thought to be a cannonball star, shot out of a star cluster and now moving rapidly through space directly away from the galactic disc.
The WZ Sagittae-type dwarf nova DY Canis Minoris (also known as VSX J074727.6+065050) flared up to magnitude 11.4 over January and February 2008 before dropping eight magnitudes to around 19.5 over approximately 80 days. It is a remote binary star system where a white dwarf and low-mass star orbit each other close enough for the former star to draw material off the latter and form an accretion disc. This material builds up until it erupts dramatically.
Deep-sky objects
The Milky Way passes through much of Canis Minor, yet it has few deep-sky objects. William Herschel recorded four objects in his 1786 work Catalogue of Nebulae and Clusters of Stars, including two he mistakenly believed were star clusters. NGC 2459 is a group of five thirteenth- and fourteenth-magnitude stars that appear to lie close together in the sky but are not related. A similar situation has occurred with NGC 2394, also in Canis Minor. This is a collection of fifteen unrelated stars of ninth magnitude and fainter.
Herschel also observed three faint galaxies, two of which are interacting with each other. NGC 2508 is a lenticular galaxy of thirteenth magnitude, estimated at 205 million light-years' distance (63 million parsecs) with a diameter of . Named as a single object by Herschel, NGC 2402 is actually a pair of near-adjacent galaxies that appear to be interacting with each other. Of only fourteenth and fifteenth magnitude respectively, the elliptical and spiral galaxies are thought to be approximately 245 million light-years distant, and each measures 55,000 light-years in diameter.
Meteor showers
The 11 Canis-Minorids, also called the Beta Canis Minorids, are a meteor shower that arise near the fifth-magnitude star 11 Canis Minoris and were discovered in 1964 by Keith Hindley, who investigated their trajectory and proposed a common origin with the comet D/1917 F1 Mellish. However, this conclusion has been refuted subsequently as the number of orbits analysed was low and their trajectories too disparate to confirm a link. They last from 4 to 15 December, peaking over 10 and 11 December.
References
Sources
External links
The Deep Photographic Guide to the Constellations: Canis Minor
Warburg Institute Iconographic Database (medieval and early modern images of Canis Minor)
Constellations listed by Ptolemy
Equatorial constellations
Constellations | Canis Minor | [
"Astronomy"
] | 4,187 | [
"Constellations listed by Ptolemy",
"Canis Minor",
"Constellations",
"Sky regions",
"Equatorial constellations"
] |
6,371 | https://en.wikipedia.org/wiki/Centaurus | Centaurus is a bright constellation in the southern sky. One of the largest constellations, Centaurus was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. In Greek mythology, Centaurus represents a centaur; a creature that is half human, half horse (another constellation named after a centaur is one from the zodiac: Sagittarius). Notable stars include Alpha Centauri, the nearest star system to the Solar System, its neighbour in the sky Beta Centauri, and HR 5171, one of the largest stars yet discovered. The constellation also contains Omega Centauri, the brightest globular cluster as visible from Earth and the largest identified in the Milky Way, possibly a remnant of a dwarf galaxy.
Notable features
Stars
Centaurus contains several very bright stars. Its alpha and beta stars are used as "pointer stars" to help observers find the constellation Crux. Centaurus has 281 stars above magnitude 6.5, meaning that they are visible to the unaided eye, the most of any constellation. Alpha Centauri, the closest star system to the Sun, has a high proper motion; it will be a mere half-degree from Beta Centauri in approximately 4000 years.
Alpha Centauri is a triple star system composed of a binary system orbited by Proxima Centauri, currently the nearest star to the Sun. Traditionally called Rigil Kentaurus (from Arabic رجل قنطورس, meaning "foot of the centaur") or Toliman (from Arabic الظليمين meaning "two male ostriches"), the system has an overall magnitude of −0.28 and is 4.4 light-years from Earth. The primary and secondary are both yellow-hued stars; the first is of magnitude −0.01 and the second of magnitude 1.35. Proxima, the tertiary star, is a red dwarf of magnitude 11.0; it appears almost 2 degrees away from the close pairing of Alpha and has an orbital period of approximately one million years. Also a flare star, Proxima has minutes-long outbursts where it brightens by over a magnitude. The two bright components of Alpha revolve around each other with an 80-year period and will next appear closest, as seen from Earth's telescopes, in 2037 and 2038; together they appear to the naked eye as the third-brightest "star" in the night sky.
The constellation's other first-magnitude star, Beta Centauri, lies beyond Proxima toward the narrow axis of Crux; together with Alpha it forms the far-southern limb of the constellation. Also called Hadar and Agena, it is a double star; the primary is a blue-hued giant star of magnitude 0.6, 525 light-years from Earth. The secondary is of magnitude 4.0 and, because of the system's distance, is separated from the primary by only a small angle, so it can be resolved only under high magnification.
The northerly star Theta Centauri, officially named Menkent, is an orange giant star of magnitude 2.06. It is the only bright star of Centaurus that is easily visible from mid-northern latitudes.
The next bright object is Gamma Centauri, a binary star which appears to the naked eye at magnitude 2.2. The primary and secondary are both blue-white hued stars of magnitude 2.9; their period is 84 years.
Centaurus also has many dimmer double stars and binary stars. 3 Centauri is a double star with a blue-white hued primary of magnitude 4.5 and a secondary of magnitude 6.0. The primary is 344 light-years away.
Centaurus is home to many variable stars. R Centauri is a Mira variable star with a minimum magnitude of 11.8 and a maximum magnitude of 5.3; it is about 1,250 light-years from Earth and has a period of 18 months. V810 Centauri is a semiregular variable.
BPM 37093 is a white dwarf star whose carbon atoms are thought to have formed a crystalline structure. Since diamond also consists of carbon arranged in a crystalline lattice (though of a different configuration), scientists have nicknamed this star "Lucy" after the Beatles song "Lucy in the Sky with Diamonds."
PDS 70 (V1032 Centauri), a low-mass T Tauri star, is found in the constellation Centaurus. In July 2018 astronomers captured the first conclusive image of a protoplanetary disk containing a nascent exoplanet, named PDS 70b.
Deep-sky objects
ω Centauri (NGC 5139), despite being listed as the constellation's "omega" star, is in fact a naked-eye globular cluster, 17,000 light-years away with a diameter of 150 light-years. It is the largest and brightest globular cluster in the Milky Way; at ten times the size of the next-largest cluster, it has a magnitude of 3.7. It is also the most luminous globular cluster in the Milky Way, at over one million solar luminosities. Omega Centauri is classified as a Shapley class VIII cluster, which means that its center is loosely concentrated. It is also one of only two globular clusters to be designated with a Bayer letter; the globular cluster 47 Tucanae (Xi Tucanae) is the only one designated with a Flamsteed number. It contains several million stars, most of which are yellow dwarf stars, but also possesses red giants and blue-white stars; the stars have an average age of 12 billion years. This has prompted suspicion that Omega Centauri was the core of a dwarf galaxy that had been absorbed by the Milky Way. Omega Centauri was determined to be nonstellar in 1677 by the English astronomer Edmond Halley, though it was visible as a star to the ancients. Its status as a globular cluster was determined by James Dunlop in 1827. To the unaided eye, Omega Centauri appears fuzzy and is obviously non-circular; it is approximately half a degree in diameter, the same size as the full Moon.
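The closing comparison is easy to verify: an object 150 light-years across at a distance of 17,000 light-years subtends about half a degree. A quick Python check:

```python
import math

def angular_diameter_deg(true_diameter: float, distance: float) -> float:
    """Angular diameter, in degrees, of an object of a given physical size
    and distance (any consistent length unit)."""
    return math.degrees(2 * math.atan(true_diameter / (2 * distance)))

# Omega Centauri, using the figures quoted above (in light-years).
print(f"{angular_diameter_deg(150, 17_000):.2f} degrees")  # ~0.51, about the full Moon
```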
Centaurus is also home to open clusters. NGC 3766 is an open cluster 6,300 light-years from Earth that is visible to the unaided eye. It contains approximately 100 stars, the brightest of which are 7th magnitude. NGC 5460 is another naked-eye open cluster, 2,300 light-years from Earth, that has an overall magnitude of 6 and contains approximately 40 stars.
There is one bright planetary nebula in Centaurus, NGC 3918, also known as the Blue Planetary. It has an overall magnitude of 8.0 and a central star of magnitude 11.0; it is 2600 light-years from Earth. The Blue Planetary was discovered by John Herschel and named for its color's similarity to Uranus, though the nebula is apparently three times larger than the planet.
Centaurus is rich in galaxies as well. NGC 4622 is a face-on spiral galaxy located 200 million light-years from Earth (redshift 0.0146). Its spiral arms wind in both directions, which makes it nearly impossible for astronomers to determine the rotation of the galaxy. Astronomers theorize that a collision with a smaller companion galaxy near the core of the main galaxy could have led to the unusual spiral structure. NGC 5253, a peculiar irregular galaxy, is located near the border with Hydra and M83, with which it likely had a close gravitational interaction 1–2 billion years ago. This may have sparked the galaxy's high rate of star formation, which continues today and contributes to its high surface brightness. NGC 5253 includes a large nebula and at least 12 large star clusters. In the eyepiece, it is a small galaxy of magnitude 10 with dimensions of 5 arcminutes by 2 arcminutes and a bright nucleus. NGC 4945 is a spiral galaxy seen edge-on from Earth, 13 million light-years away. It is visible with any amateur telescope, as well as binoculars under good conditions; it has been described as "shaped like a candle flame", being long and thin (16' by 3'). In the eyepiece of a large telescope, its southeastern dust lane becomes visible. Another galaxy is NGC 5102, found by star-hopping from Iota Centauri. In the eyepiece, it appears as an elliptical object 9 arcminutes by 2.5 arcminutes tilted on a southwest–northeast axis.
One of the closest active galaxies to Earth is the Centaurus A galaxy, NGC 5128, at 11 million light-years away (redshift 0.00183). It has a supermassive black hole at its core, which expels massive jets of matter that emit radio waves due to synchrotron radiation. Astronomers posit that its dust lanes, not common in elliptical galaxies, are due to a previous merger with another galaxy, probably a spiral galaxy. NGC 5128 appears in the optical spectrum as a fairly large elliptical galaxy with a prominent dust lane. Its overall magnitude is 7.0 and it has been seen under perfect conditions with the naked eye, making it one of the most distant objects visible to the unaided observer. In equatorial and southern latitudes, it is easily found by star hopping from Omega Centauri. In small telescopes, the dust lane is not visible; it begins to appear with about 4 inches of aperture under good conditions. In large amateur instruments, above about 12 inches in aperture, the dust lane's west-northwest to east-southeast direction is easily discerned. Another dim dust lane on the east side of the 12-arcminute-by-15-arcminute galaxy is also visible. ESO 270-17, also called the Fourcade-Figueroa Object, is a low-surface brightness object believed to be the remnants of a galaxy; it does not have a core and is very difficult to observe with an amateur telescope. It measures 7 arcminutes by 1 arcminute. It likely originated as a spiral galaxy and underwent a catastrophic gravitational interaction with Centaurus A around 500 million years ago, stopping its rotation and destroying its structure.
NGC 4650A is a polar-ring galaxy 136 million light-years from Earth (redshift 0.01). It has a central core made of older stars that resembles an elliptical galaxy, and an outer ring of young stars that orbits around the core. The plane of the outer ring is distorted, which suggests that NGC 4650A is the result of a galaxy collision about a billion years ago. This galaxy has also been cited in studies of dark matter, because the stars in the outer ring orbit too quickly for their collective mass. This suggests that the galaxy is surrounded by a dark matter halo, which provides the necessary mass.
One of the closest galaxy clusters to Earth is the Centaurus Cluster at 160 million light-years away, having redshift 0.0114. It has a cooler, denser central region of gas and a hotter, more diffuse outer region. The intracluster medium in the Centaurus Cluster has a high concentration of metals (elements heavier than helium) due to a large number of supernovae. This cluster also possesses a plume of gas whose origin is unknown.
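The redshifts and distances quoted above for NGC 4622 and the Centaurus Cluster are mutually consistent under Hubble's law, d ~ c*z/H0. A sketch, assuming H0 ~ 70 km/s/Mpc (the article quotes no value for the Hubble constant):

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s/Mpc (an assumed value)
MPC_TO_MLY = 3.2616    # million light-years per megaparsec

def hubble_distance_mly(redshift: float) -> float:
    """Low-redshift Hubble-law distance, in millions of light-years."""
    return C_KM_S * redshift / H0 * MPC_TO_MLY

# Redshifts quoted in the text; both recover the quoted distances.
print(f"NGC 4622 (z = 0.0146):          ~{hubble_distance_mly(0.0146):.0f} Mly")  # ~200
print(f"Centaurus Cluster (z = 0.0114): ~{hubble_distance_mly(0.0114):.0f} Mly")  # ~160
```

For very nearby galaxies such as NGC 5128, peculiar velocities dominate over the Hubble flow, so this simple conversion is not reliable there.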
History
While Centaurus now has a high southern latitude, at the dawn of civilization it was an equatorial constellation. Precession has been slowly shifting it southward for millennia, and it is now close to its maximal southern declination. In a little over 7000 years it will be at maximum visibility for those in the northern hemisphere, visible at times in the year up to quite a high northern latitude.
The figure of Centaurus can be traced back to a Babylonian constellation known as the Bison-man (MUL.GUD.ALIM). This being was depicted in two major forms: firstly, as a 4-legged bison with a human head, and secondly, as a being with a man's head and torso attached to the rear legs and tail of a bull or bison. It has been closely associated with the Sun god Utu-Shamash from very early times.
The Greeks depicted the constellation as a centaur and gave it its current name. It was mentioned by Eudoxus in the 4th century BC and Aratus in the 3rd century BC. In the 2nd century AD, Claudius Ptolemy catalogued 37 stars in Centaurus, including Alpha Centauri. Large as it is now, in earlier times it was even larger, as the constellation Lupus was treated as an asterism within Centaurus, portrayed in illustrations as an unspecified animal either in the centaur's grasp or impaled on its spear. The Southern Cross, which is now regarded as a separate constellation, was treated by the ancients as a mere asterism formed of the stars composing the centaur's legs. Additionally, what is now the minor constellation Circinus was treated as undefined stars under the centaur's front hooves.
According to the Roman poet Ovid (Fasti v.379), the constellation honors the centaur Chiron, who was tutor to many of the earlier Greek heroes including Heracles (Hercules), Theseus, and Jason, the leader of the Argonauts. It is not to be confused with the more warlike centaur represented by the zodiacal constellation Sagittarius. The legend associated with Chiron says that he was accidentally poisoned with an arrow shot by Hercules, and was subsequently placed in the heavens.
Equivalents
In Chinese astronomy, the stars of Centaurus are found in three areas: the Azure Dragon of the East (東方青龍, Dōng Fāng Qīng Lóng), the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què), and the Southern Asterisms (近南極星區, Jìnnánjíxīngōu). Not all of the stars of Centaurus can be seen from China, and the unseen stars were classified among the Southern Asterisms by Xu Guangqi, based on his study of western star charts. However, most of the brightest stars of Centaurus, including α Centauri, θ Centauri (or Menkent), ε Centauri and η Centauri, can be seen in the Chinese sky.
Some Polynesian peoples considered the stars of Centaurus to be a constellation as well. On Pukapuka, Centaurus had two names: Na Mata-o-te-tokolua and Na Lua-mata-o-Wua-ma-Velo. In Tonga, the constellation was called by four names: O-nga-tangata, Tautanga-ufi, Mamangi-Halahu, and Mau-kuo-mau. Alpha and Beta Centauri were not named specifically by the people of Pukapuka or Tonga, but they were named by the people of Hawaii and the Tuamotus. In Hawaii, the name for Alpha Centauri was either Melemele or Ka Maile-hope and the name for Beta Centauri was either Polapola or Ka Maile-mua. In the Tuamotu islands, Alpha was called Na Kuhi and Beta was called Tere.
The Pointer (α Centauri and β Centauri) is one of the asterisms used by Bugis sailors for navigation, called bintoéng balué, meaning "the widowed-before-marriage". It is also called bintoéng sallatang meaning "southern star".
Namesakes
Two United States Navy ships, and , were named after Centaurus, the constellation.
See also
Centaurus (Chinese astronomy)
List of brightest stars
References
Citations
External links
The Deep Photographic Guide to the Constellations: Centaurus
Starry Night Photography: Centaurus
Ian Ridpath's Star Tales – Centaurus
Warburg Institute Iconographic Database (medieval and early modern images of Centaurus)
Constellations
Southern constellations
Constellations listed by Ptolemy | Centaurus | [
"Astronomy"
] | 3,361 | [
"Constellations listed by Ptolemy",
"Centaurus",
"Southern constellations",
"Constellations",
"Sky regions"
] |
6,416 | https://en.wikipedia.org/wiki/Impact%20crater | An impact crater is a depression in the surface of a solid astronomical body formed by the hypervelocity impact of a smaller object. In contrast to volcanic craters, which result from explosion or internal collapse, impact craters typically have raised rims and floors that are lower in elevation than the surrounding terrain. Impact craters are typically circular, though they can be elliptical in shape or even irregular due to events such as landslides. Impact craters range in size from microscopic craters seen on lunar rocks returned by the Apollo Program to simple bowl-shaped depressions and vast, complex, multi-ringed impact basins. Meteor Crater is a well-known example of a small impact crater on Earth.
Impact craters are the dominant geographic features on many solid Solar System objects including the Moon, Mercury, Callisto, Ganymede, and most small moons and asteroids. On other planets and moons that experience more active surface geological processes, such as Earth, Venus, Europa, Io, Titan, and Triton, visible impact craters are less common because they become eroded, buried, or transformed by tectonic and volcanic processes over time. Where such processes have destroyed most of the original crater topography, the terms impact structure or astrobleme are more commonly used. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth.
The cratering records of very old surfaces, such as Mercury, the Moon, and the southern highlands of Mars, record a period of intense early bombardment in the inner Solar System around 3.9 billion years ago. The rate of crater production on Earth has since been considerably lower, but it is appreciable nonetheless. Earth experiences, on average, from one to three impacts large enough to produce a crater every million years. This indicates that there should be far more relatively young craters on the planet than have been discovered so far. The cratering rate in the inner solar system fluctuates as a consequence of collisions in the asteroid belt that create a family of fragments that are often sent cascading into the inner solar system. Formed in a collision 80 million years ago, the Baptistina family of asteroids is thought to have caused a large spike in the impact rate. The rate of impact cratering in the outer Solar System could be different from the inner Solar System.
Although Earth's active surface processes quickly destroy the impact record, about 190 terrestrial impact craters have been identified. These range in diameter from a few tens of meters up to about , and they range in age from recent times (e.g. the Sikhote-Alin craters in Russia whose creation was witnessed in 1947) to more than two billion years, though most are less than 500 million years old because geological processes tend to obliterate older craters. They are also selectively found in the stable interior regions of continents. Few undersea craters have been discovered because of the difficulty of surveying the sea floor, the rapid rate of change of the ocean bottom, and the subduction of the ocean floor into Earth's interior by processes of plate tectonics.
History
Daniel M. Barringer, a mining engineer, was convinced as early as 1903 that the crater he owned, Meteor Crater, was of cosmic origin. Most geologists at the time assumed it formed as the result of a volcanic steam eruption.
In the 1920s, the American geologist Walter H. Bucher studied a number of sites now recognized as impact craters in the United States. He concluded they had been created by some great explosive event, but believed that this force was probably volcanic in origin. However, in 1936, the geologists John D. Boon and Claude C. Albritton Jr. revisited Bucher's studies and concluded that the craters that he studied were probably formed by impacts.
Grove Karl Gilbert suggested in 1893 that the Moon's craters were formed by large asteroid impacts. Ralph Baldwin in 1949 wrote that the Moon's craters were mostly of impact origin. Around 1960, Gene Shoemaker revived the idea. According to David H. Levy, Shoemaker "saw the craters on the Moon as logical impact sites that were formed not gradually, in eons, but explosively, in seconds." For his PhD degree at Princeton University (1960), under the guidance of Harry Hammond Hess, Shoemaker studied the impact dynamics of Meteor Crater. Shoemaker noted that Meteor Crater had the same form and structure as two explosion craters created from atomic bomb tests at the Nevada Test Site, notably Jangle U in 1951 and Teapot Ess in 1955. In 1960, Edward C. T. Chao and Shoemaker identified coesite (a form of silicon dioxide) at Meteor Crater, proving the crater was formed from an impact generating extremely high temperatures and pressures. They followed this discovery with the identification of coesite within suevite at Nördlinger Ries, proving its impact origin.
Armed with the knowledge of shock-metamorphic features, Carlyle S. Beals and colleagues at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada and Wolf von Engelhardt of the University of Tübingen in Germany began a methodical search for impact craters. By 1970, they had tentatively identified more than 50. Although their work was controversial, the American Apollo Moon landings, which were in progress at the time, provided supportive evidence by recognizing the rate of impact cratering on the Moon. Because the processes of erosion on the Moon are minimal, craters persist. Since the Earth could be expected to have roughly the same cratering rate as the Moon, it became clear that the Earth had suffered far more impacts than could be seen by counting evident craters.
Crater formation
Impact cratering involves high velocity collisions between solid objects, typically much greater than the speed of sound in those objects. Such hyper-velocity impacts produce physical effects such as melting and vaporization that do not occur in familiar sub-sonic collisions. On Earth, ignoring the slowing effects of travel through the atmosphere, the lowest impact velocity with an object from space is equal to the gravitational escape velocity of about 11 km/s. The fastest impacts occur at about 72 km/s in the "worst case" scenario in which an object in a retrograde near-parabolic orbit hits Earth. The median impact velocity on Earth is about 20 km/s.
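Both bounding figures follow from elementary orbital mechanics: the minimum is Earth's escape velocity, v_esc = sqrt(2*G*M/R), while the worst case combines the heliocentric speed of a retrograde parabolic orbit at 1 AU with Earth's own orbital speed. A Python sketch, using standard values for Earth and the Sun (assumed here, not quoted in the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

# Minimum impact speed: Earth's escape velocity.
v_esc = math.sqrt(2 * G * M_EARTH / R_EARTH)

# Heliocentric speeds at 1 AU: Earth's circular orbit, and a parabolic
# (barely bound) orbit such as a long-period comet's.
v_earth = math.sqrt(G * M_SUN / AU)
v_parabolic = math.sqrt(2 * G * M_SUN / AU)

# Worst case: head-on retrograde encounter, with Earth's gravitational
# focusing adding in quadrature.
v_max = math.sqrt((v_earth + v_parabolic) ** 2 + v_esc**2)

print(f"escape velocity: {v_esc / 1000:.1f} km/s")  # ~11.2
print(f"maximum impact:  {v_max / 1000:.1f} km/s")  # ~73, close to the ~72 km/s quoted
```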
However, the slowing effects of travel through the atmosphere rapidly decelerate any potential impactor, especially in the lowest 12 kilometres where 90% of the Earth's atmospheric mass lies. Meteors of up to 7,000 kg lose all their cosmic velocity due to atmospheric drag at a certain altitude (retardation point), and start to accelerate again due to Earth's gravity until the body reaches its terminal velocity of 0.09 to 0.16 km/s. The larger the meteoroid (i.e. asteroids and comets) the more of its initial cosmic velocity it preserves. While an object of 9,000 kg maintains about 6% of its original velocity, one of 900,000 kg already preserves about 70%. Extremely large bodies (about 100,000 tonnes) are not slowed by the atmosphere at all, and impact with their initial cosmic velocity if no prior disintegration occurs.
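The quoted terminal velocities follow from balancing gravity against aerodynamic drag, v_t = sqrt(2*m*g / (rho_air * C_d * A)). A toy Python sketch, assuming a spherical stony body, a drag coefficient near 1, and sea-level air density, none of which are specified in the text:

```python
import math

RHO_AIR = 1.2      # kg/m^3, near sea level (assumed)
C_D = 1.0          # drag coefficient for a tumbling irregular body (assumed)
RHO_ROCK = 3500.0  # kg/m^3, typical stony meteorite (assumed)
g = 9.81           # m/s^2

def terminal_velocity(mass_kg: float) -> float:
    """Terminal velocity (m/s) of a sphere of stony-meteorite density."""
    radius = (3 * mass_kg / (4 * math.pi * RHO_ROCK)) ** (1 / 3)
    area = math.pi * radius**2
    return math.sqrt(2 * mass_kg * g / (RHO_AIR * C_D * area))

for mass in (1, 20, 500):
    print(f"{mass:>4} kg: ~{terminal_velocity(mass):.0f} m/s")
# ~56, ~92 and ~157 m/s: the same order as the 0.09-0.16 km/s quoted above.
# The exact figures depend strongly on shape, density and altitude.
```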
Impacts at these high speeds produce shock waves in solid materials, and both impactor and the material impacted are rapidly compressed to high density. Following initial compression, the high-density, over-compressed region rapidly depressurizes, exploding violently, to set in train the sequence of events that produces the impact crater. Impact-crater formation is therefore more closely analogous to cratering by high explosives than by mechanical displacement. Indeed, the energy density of some material involved in the formation of impact craters is many times higher than that generated by high explosives. Since craters are caused by explosions, they are nearly always circular – only very low-angle impacts cause significantly elliptical craters.
This describes impacts on solid surfaces. Impacts on porous surfaces, such as that of Hyperion, may produce internal compression without ejecta, punching a hole in the surface without filling in nearby craters. This may explain the 'sponge-like' appearance of that moon.
It is convenient to divide the impact process conceptually into three distinct stages: (1) initial contact and compression, (2) excavation, (3) modification and collapse. In practice, there is overlap between the three processes with, for example, the excavation of the crater continuing in some regions while modification and collapse is already underway in others.
Contact and compression
In the absence of atmosphere, the impact process begins when the impactor first touches the target surface. This contact accelerates the target and decelerates the impactor. Because the impactor is moving so rapidly, the rear of the object moves a significant distance during the short-but-finite time taken for the deceleration to propagate across the impactor. As a result, the impactor is compressed, its density rises, and the pressure within it increases dramatically. Peak pressures in large impacts exceed 1 TPa to reach values more usually found deep in the interiors of planets, or generated artificially in nuclear explosions.
In physical terms, a shock wave originates from the point of contact. As this shock wave expands, it decelerates and compresses the impactor, and it accelerates and compresses the target. Stress levels within the shock wave far exceed the strength of solid materials; consequently, both the impactor and the target close to the impact site are irreversibly damaged. Many crystalline minerals can be transformed into higher-density phases by shock waves; for example, the common mineral quartz can be transformed into the higher-pressure forms coesite and stishovite. Many other shock-related changes take place within both impactor and target as the shock wave passes through, and some of these changes can be used as diagnostic tools to determine whether particular geological features were produced by impact cratering.
As the shock wave decays, the shocked region decompresses towards more usual pressures and densities. The damage produced by the shock wave raises the temperature of the material. In all but the smallest impacts this increase in temperature is sufficient to melt the impactor, and in larger impacts to vaporize most of it and to melt large volumes of the target. As well as being heated, the target near the impact is accelerated by the shock wave, and it continues moving away from the impact behind the decaying shock wave.
Excavation
Contact, compression, decompression, and the passage of the shock wave all occur within a few tenths of a second for a large impact. The subsequent excavation of the crater occurs more slowly, and during this stage the flow of material is largely subsonic. During excavation, the crater grows as the accelerated target material moves away from the point of impact. The target's motion is initially downwards and outwards, but it becomes outwards and upwards. The flow initially produces an approximately hemispherical cavity that continues to grow, eventually producing a paraboloid (bowl-shaped) crater in which the centre has been pushed down, a significant volume of material has been ejected, and a topographically elevated crater rim has been pushed up. When this cavity has reached its maximum size, it is called the transient cavity.
The depth of the transient cavity is typically a quarter to a third of its diameter. Ejecta thrown out of the crater do not include material excavated from the full depth of the transient cavity; typically the depth of maximum excavation is only about a third of the total depth. As a result, about one third of the volume of the transient crater is formed by the ejection of material, and the remaining two thirds is formed by the displacement of material downwards, outwards and upwards, to form the elevated rim. For impacts into highly porous materials, a significant crater volume may also be formed by the permanent compaction of the pore space. Such compaction craters may be important on many asteroids, comets and small moons.
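Those proportions support a quick back-of-envelope budget. Approximating the transient cavity as a paraboloid of revolution (volume pi*D^2*h/8 for rim diameter D and depth h), with depth one third of the diameter and one third of the volume ejected, a hypothetical 1 km cavity works out as follows:

```python
import math

def transient_cavity_budget(diameter_km: float, depth_ratio: float = 1 / 3):
    """Toy volume budget for a paraboloid transient cavity, using the
    proportions quoted above: depth about a third of the diameter, about
    one third of the volume ejected and two thirds displaced."""
    depth = depth_ratio * diameter_km
    volume = math.pi * diameter_km**2 * depth / 8  # paraboloid of revolution
    return volume, volume / 3, 2 * volume / 3

# A hypothetical 1 km transient cavity:
total, ejected, displaced = transient_cavity_budget(1.0)
print(f"total {total:.3f} km^3, ejected ~{ejected:.3f} km^3, displaced ~{displaced:.3f} km^3")
```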
In large impacts, as well as material displaced and ejected to form the crater, significant volumes of target material may be melted and vaporized together with the original impactor. Some of this impact melt rock may be ejected, but most of it remains within the transient crater, initially forming a layer of impact melt coating the interior of the transient cavity. In contrast, the hot dense vaporized material expands rapidly out of the growing cavity, carrying some solid and molten material within it as it does so. As this hot vapor cloud expands, it rises and cools much like the archetypal mushroom cloud generated by large nuclear explosions. In large impacts, the expanding vapor cloud may rise to many times the scale height of the atmosphere, effectively expanding into free space.
Most material ejected from the crater is deposited within a few crater radii, but a small fraction may travel large distances at high velocity, and in large impacts it may exceed escape velocity and leave the impacted planet or moon entirely. The majority of the fastest material is ejected from close to the center of impact, and the slowest material is ejected close to the rim at low velocities to form an overturned coherent flap of ejecta immediately outside the rim. As ejecta escapes from the growing crater, it forms an expanding curtain in the shape of an inverted cone. The trajectory of individual particles within the curtain is thought to be largely ballistic.
Small volumes of un-melted and relatively un-shocked material may be spalled at very high relative velocities from the surface of the target and from the rear of the impactor. Spalling provides a potential mechanism whereby material may be ejected into inter-planetary space largely undamaged, and whereby small volumes of the impactor may be preserved undamaged even in large impacts. Small volumes of high-speed material may also be generated early in the impact by jetting. This occurs when two surfaces converge rapidly and obliquely at a small angle, and high-temperature highly shocked material is expelled from the convergence zone with velocities that may be several times larger than the impact velocity.
Modification and collapse
In most circumstances, the transient cavity is not stable and collapses under gravity. In small craters, less than about 4 km diameter on Earth, there is some limited collapse of the crater rim coupled with debris sliding down the crater walls and drainage of impact melts into the deeper cavity. The resultant structure is called a simple crater, and it remains bowl-shaped and superficially similar to the transient crater. In simple craters, the original excavation cavity is overlain by a lens of collapse breccia, ejecta and melt rock, and a portion of the central crater floor may sometimes be flat.
Above a certain threshold size, which varies with planetary gravity, the collapse and modification of the transient cavity is much more extensive, and the resulting structure is called a complex crater. The collapse of the transient cavity is driven by gravity, and involves both the uplift of the central region and the inward collapse of the rim. The central uplift is not the result of elastic rebound, which is a process in which a material with elastic strength attempts to return to its original geometry; rather the collapse is a process in which a material with little or no strength attempts to return to a state of gravitational equilibrium.
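The gravity dependence of that threshold is often approximated, to first order, as an inverse scaling with surface gravity: bodies with weaker gravity retain simple, bowl-shaped craters up to larger diameters. A rough sketch, taking the ~4 km terrestrial transition from the text; the 1/g scaling and the surface gravities are assumptions of this sketch, and real transition diameters also depend on target strength, so this is order-of-magnitude only:

```python
# First-order scaling of the simple-to-complex crater transition diameter:
# D_transition ~ D_earth * (g_earth / g_body). The ~4 km figure for Earth
# comes from the text; the gravities and the 1/g scaling are assumptions.

D_TRANSITION_EARTH_KM = 4.0

SURFACE_GRAVITY = {  # m/s^2
    "Earth": 9.81,
    "Mars": 3.71,
    "Mercury": 3.70,
    "Moon": 1.62,
}

for body, g in SURFACE_GRAVITY.items():
    d = D_TRANSITION_EARTH_KM * SURFACE_GRAVITY["Earth"] / g
    print(f"{body:<8} ~{d:.0f} km")
# Predicts ~11 km for Mars and ~24 km for the Moon, the right order of
# magnitude compared with observed transition diameters.
```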
Complex craters have uplifted centers, and they have typically broad flat shallow crater floors, and terraced walls. At the largest sizes, one or more exterior or interior rings may appear, and the structure may be labeled an impact basin rather than an impact crater. Complex-crater morphology on rocky planets appears to follow a regular sequence with increasing size: small complex craters with a central topographic peak are called central peak craters, for example Tycho; intermediate-sized craters, in which the central peak is replaced by a ring of peaks, are called peak-ring craters, for example Schrödinger; and the largest craters contain multiple concentric topographic rings, and are called multi-ringed basins, for example Orientale. On icy (as opposed to rocky) bodies, other morphological forms appear that may have central pits rather than central peaks, and at the largest sizes may contain many concentric rings. Valhalla on Callisto is an example of this type.
Subsequent modification
Long after an impact event, a crater may be further modified by erosion, mass wasting processes, viscous relaxation, or erased entirely. These effects are most prominent on geologically and meteorologically active bodies such as Earth, Titan, Triton, and Io. However, heavily modified craters may be found on more primordial bodies such as Callisto, where many ancient craters flatten into bright ghost craters, or palimpsests.
Identifying impact craters
Non-explosive volcanic craters can usually be distinguished from impact craters by their irregular shape and the association of volcanic flows and other volcanic materials. Impact craters produce melted rocks as well, but usually in smaller volumes with different characteristics.
The distinctive mark of an impact crater is the presence of rock that has undergone shock-metamorphic effects, such as shatter cones, melted rocks, and crystal deformations. The problem is that these materials tend to be deeply buried, at least for simple craters. They tend to be revealed in the uplifted center of a complex crater, however.
Impacts produce distinctive shock-metamorphic effects that allow impact sites to be reliably identified. Such shock-metamorphic effects can include:
A layer of shattered or "brecciated" rock under the floor of the crater. This layer is called a "breccia lens".
Shatter cones, which are chevron-shaped impressions in rocks. Such cones are formed most easily in fine-grained rocks.
High-temperature rock types, including laminated and welded blocks of sand, spherulites and tektites, or glassy spatters of molten rock. The impact origin of tektites has been questioned by some researchers; they have observed some volcanic features in tektites not found in impactites. Tektites are also drier (contain less water) than typical impactites. While rocks melted by the impact resemble volcanic rocks, they incorporate unmelted fragments of bedrock, form unusually large and unbroken fields, and have a much more mixed chemical composition than volcanic materials spewed up from within the Earth. They also may have relatively large amounts of trace elements that are associated with meteorites, such as nickel, platinum, iridium, and cobalt. Note: scientific literature has reported that some "shock" features, such as small shatter cones, which are often associated only with impact events, have been found also in terrestrial volcanic ejecta.
Microscopic pressure deformations of minerals. These include fracture patterns in crystals of quartz and feldspar, and formation of high-pressure materials such as diamond, derived from graphite and other carbon compounds, or stishovite and coesite, varieties of shocked quartz.
Buried craters, such as the Decorah crater, can be identified through drill coring, aerial electromagnetic resistivity imaging, and airborne gravity gradiometry.
Economic importance
On Earth, impact craters have resulted in useful minerals. Some of the ores produced from impact-related effects on Earth include ores of iron, uranium, gold, copper, and nickel. It is estimated that the value of materials mined from impact structures is five billion dollars per year just for North America. The eventual usefulness of impact craters depends on several factors, especially the nature of the materials that were impacted and when the materials were affected. In some cases, the deposits were already in place and the impact brought them to the surface. These are called "progenetic economic deposits." Others were created during the actual impact. The great energy involved caused melting. Useful minerals formed as a result of this energy are classified as "syngenetic deposits." The third type, called "epigenetic deposits," is caused by the creation of a basin from the impact. Many of the minerals that our modern lives depend on are associated with impacts in the past. The Vredefort Dome in the center of the Witwatersrand Basin is the largest goldfield in the world, which has supplied about 40% of all the gold ever mined in an impact structure (though the gold did not come from the bolide). The asteroid that struck the region was wide. The Sudbury Basin was caused by an impacting body over in diameter. This basin is famous for its deposits of nickel, copper, and platinum group elements. An impact was involved in making the Carswell structure in Saskatchewan, Canada; it contains uranium deposits.
Hydrocarbons are common around impact structures. Fifty percent of the impact structures in North America that lie in hydrocarbon-bearing sedimentary basins contain oil or gas fields.
Lists of craters
Impact craters on Earth
On Earth, the recognition of impact craters is a branch of geology, and is related to planetary geology in the study of other worlds. Out of many proposed craters, relatively few are confirmed. The following twenty are a sample of confirmed and well-documented impact sites.
See the Earth Impact Database, a website concerned with 190 scientifically confirmed impact craters on Earth.
Some extraterrestrial craters
Caloris Basin (Mercury)
Hellas Basin (Mars)
Herschel crater (Mimas)
Mare Orientale (Moon)
Petrarch crater (Mercury)
South Pole – Aitken basin (Moon)
Largest named craters in the Solar System
North Polar Basin/Borealis Basin (disputed) – Mars – Diameter: 10,600 km
South Pole-Aitken basin – Moon – Diameter: 2,500 km
Hellas Basin – Mars – Diameter: 2,100 km
Caloris Basin – Mercury – Diameter: 1,550 km
Sputnik Planitia – Pluto – Diameter: 1,300 km
Imbrium Basin – Moon – Diameter: 1,100 km
Isidis Planitia – Mars – Diameter: 1,100 km
Mare Tranquillitatis – Moon – Diameter: 870 km
Argyre Planitia – Mars – Diameter: 800 km
Rembrandt – Mercury – Diameter: 715 km
Serenitatis Basin – Moon – Diameter: 700 km
Mare Nubium – Moon – Diameter: 700 km
Beethoven – Mercury – Diameter: 625 km
Valhalla – Callisto – Diameter: 600 km, with rings to 4,000 km diameter
Hertzsprung – Moon – Diameter: 590 km
Turgis – Iapetus – Diameter: 580 km
Apollo – Moon – Diameter: 540 km
Engelier – Iapetus – Diameter: 504 km
Mamaldi – Rhea – Diameter: 480 km
Huygens – Mars – Diameter: 470 km
Schiaparelli – Mars – Diameter: 470 km
Rheasilvia – 4 Vesta – Diameter: 460 km
Gerin – Iapetus – Diameter: 445 km
Odysseus – Tethys – Diameter: 445 km
Korolev – Moon – Diameter: 430 km
Falsaron – Iapetus – Diameter: 424 km
Dostoevskij – Mercury – Diameter: 400 km
Menrva – Titan – Diameter: 392 km
Tolstoj – Mercury – Diameter: 390 km
Goethe – Mercury – Diameter: 380 km
Malprimis – Iapetus – Diameter: 377 km
Tirawa – Rhea – Diameter: 360 km
Orientale Basin – Moon – Diameter: 350 km, with rings to 930 km diameter
Evander – Dione – Diameter: 350 km
Epigeus – Ganymede – Diameter: 343 km
Gertrude – Titania – Diameter: 326 km
Telemus – Tethys – Diameter: 320 km
Asgard – Callisto – Diameter: 300 km, with rings to 1,400 km diameter
Vredefort impact structure – Earth – Diameter: 300 km
Burney – Pluto – Diameter: 296 km
There are approximately twelve more impact craters/basins larger than 300 km on the Moon, five on Mercury, and four on Mars. Large basins, some unnamed but mostly smaller than 300 km, can also be found on Saturn's moons Dione, Rhea and Iapetus.
See also
Traces of Catastrophe, a 1998 book from the Lunar and Planetary Institute – comprehensive reference on impact crater science
References
Bibliography
Further reading
External links
The Geological Survey of Canada Crater database, 172 impact structures
Aerial Explorations of Terrestrial Meteorite Craters
Impact Meteor Crater Viewer Google Maps Page with Locations of Meteor Craters around the world
Solarviews: Terrestrial Impact Craters
Lunar and Planetary Institute slideshow: contains pictures
Earth Impact Effects Program – estimates crater size and other effects of a specified body colliding with Earth.
Crater
Geology of the Moon
Depressions (geology)
Articles containing video clips
Planetary geology | Impact crater | [
"Astronomy"
] | 5,014 | [
"Astronomical objects",
"Impact craters"
] |
6,420 | https://en.wikipedia.org/wiki/Corona%20Borealis | Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by her in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den or a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern.
The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946, and is predicted to do the same in 2025. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five stars in the constellation host Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster.
Characteristics
Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the IAU designated constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere.
Features
Stars
The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas Uranometria. Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5.
Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun and 57 times as luminous, and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is a little smaller than the Sun (0.9 times its diameter). Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space.
Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around , 2.6 solar radii (), and . The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around , , and between 4 and . Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk.
Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around and has swollen to . It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core.
Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system divisible by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries.
Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a nuclear chain reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis started dimming in March 2023 and it is known that before it goes nova it dims for about a year; for this reason it is expected to go nova at any time between March and September, 2024. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 μm) ejected periodically.
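The magnitude swings described above correspond to very large brightness ratios, because the magnitude scale is logarithmic: five magnitudes equal a factor of 100 in flux. A minimal sketch of that standard (Pogson) relation, using the two stars above as examples:

```python
def flux_ratio(m_bright, m_faint):
    """Brightness ratio implied by an astronomical magnitude difference.

    Uses the standard Pogson relation: 5 magnitudes = a factor of 100.
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# R Coronae Borealis fading from magnitude 6 to 15 dims by a factor of ~4000:
print(f"{flux_ratio(6, 15):,.0f}")   # ~3,981
# T Coronae Borealis brightening from 10 to 2 gains a factor of ~1600:
print(f"{flux_ratio(2, 10):,.0f}")   # ~1,585
```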
There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Bootes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days.
TY Coronae Borealis is a pulsating white dwarf of ZZ Ceti type, which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star.
Extrasolar planetary systems
Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter, orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III that has swollen to and . Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lies a dust debris disk, and one planet with a period of 3.4 years. This planet's mass is estimated at . The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet with a mass of that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of . Discovered by the Doppler method in 2010, it takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star located approximately light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days.
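Four of these five systems were found by the radial velocity method, which measures the small periodic Doppler wobble a planet induces in its host star. As a rough, generic illustration (not a model of any specific system above; the function name is mine), the standard semi-amplitude formula can be coded directly and sanity-checked against the roughly 12.5 m/s wobble Jupiter induces in the Sun:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg

def rv_semi_amplitude(p_days, m_planet_kg, m_star_kg, incl_deg=90.0, ecc=0.0):
    """Stellar radial-velocity semi-amplitude K (m/s) induced by one planet."""
    p = p_days * 86400.0
    return ((2 * math.pi * G / p) ** (1 / 3)
            * m_planet_kg * math.sin(math.radians(incl_deg))
            / ((m_star_kg + m_planet_kg) ** (2 / 3) * math.sqrt(1 - ecc**2)))

# Sanity check: Jupiter tugs the Sun by roughly 12.5 m/s over its 11.86-yr orbit.
print(f"{rv_semi_amplitude(11.86 * 365.25, M_JUP, M_SUN):.1f} m/s")  # ~12.5
```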
The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and Solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001.
Deep-sky objects
Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and an elliptical galaxy, respectively, close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, make up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters.
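The recession velocity quoted for Abell 2142 follows directly from its redshift via the low-redshift approximation v ≈ cz, which is adequate only for z much less than 1:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity(z):
    """Low-redshift approximation v = c*z; adequate for z << 1."""
    return C_KM_S * z

# Abell 2142 at z = 0.0909 recedes at roughly the 27,250 km/s quoted above:
print(f"{recession_velocity(0.0909):,.0f} km/s")  # ~27,251 km/s
```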
Mythology
In Greek mythology, Corona Borealis was linked to the legend of Theseus and the minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternative version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the minotaur that the Cretans have demanded tribute from Athens to feed. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. The Astronomica, attributed to Hyginus, linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having been previously hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Its proximity to the constellations Hercules (which, the same source reports, was once attributed to Theseus, among others) and Lyra (Theseus' lyre in one account) could indicate that the three constellations were invented as a group. Corona Borealis was one of the 48 constellations mentioned in the Almagest of classical astronomer Ptolemy.
In Mesopotamia, Corona Borealis was associated with the goddess Nanaya.
In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as Darželis, the "flower garden".
The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up", a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as "the dish/bowl of the poor people".
The Skidi, a Native American people, saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the Heavenly Sisters, who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, with her heartbroken husband and son following later. The Mi'kmaq of eastern Canada saw Corona Borealis as Mskegwǒm, the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris).
Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it Na Kaua-ki-tokerau and probably Te Hetu. The constellation was likely called Kaua-mea in Hawaii, Rangawhenua in New Zealand, and Te Wale-o-Awitu in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called Ao-o-Uvea or Kau-kupenga.
In Australian Aboriginal astronomy, the constellation is called womera ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as mullion wollai "eagle's nest", with Altair and Vega—each called mullion—the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together and consider matters of existence.
Later references
Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas Mercurii Philosophicii Firmamentum Firminianum Descriptionem by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists. Finnish band Cadacross released an album titled Corona Borealis in 2002.
See also
Corona Borealis (Chinese astronomy)
Notes
References
Cited texts
External links
Warburg Institute Iconographic Database (ca 160 medieval and early modern images of Corona Borealis)
Ariadne
Constellations listed by Ptolemy
Constellations
Mythological clothing
Mythology of Dionysus
Northern constellations
Objects in Greek mythology | Corona Borealis | [
"Astronomy"
] | 4,354 | [
"Constellations listed by Ptolemy",
"Constellations",
"Northern constellations",
"Corona Borealis",
"Sky regions"
] |
6,421 | https://en.wikipedia.org/wiki/Cygnus%20%28constellation%29 | Cygnus is a northern constellation on the plane of the Milky Way, deriving its name from the Latinized Greek word for swan. Cygnus is one of the most recognizable constellations of the northern summer and autumn, and it features a prominent asterism known as the Northern Cross (in contrast to the Southern Cross). Cygnus was among the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations.
Cygnus contains Deneb (ذنب, translit. ḏanab, "tail"), one of the brightest stars in the night sky and the most distant first-magnitude star, as its "tail star" and one corner of the Summer Triangle, with the constellation forming an east-pointing altitude of the triangle. It also has some notable X-ray sources and the giant stellar association of Cygnus OB2. One of the stars of this association, NML Cygni, is one of the largest stars currently known. The constellation is also home to Cygnus X-1, a distant X-ray binary containing a supergiant and unseen massive companion that was the first object widely held to be a black hole.
Many star systems in Cygnus have known planets as a result of the Kepler Mission observing one patch of the sky, an area around Cygnus.
The eastern part of the constellation contains part of the Hercules–Corona Borealis Great Wall, a giant galaxy filament in the deep sky that is the largest known structure in the observable universe and covers most of the northern sky.
History and mythology
In Eastern and World astronomy
In Polynesia, Cygnus was often recognized as a separate constellation. In Tonga it was called Tuula-lupe, and in the Tuamotus it was called Fanui-tai. In New Zealand it was called Mara-tea, in the Society Islands it was called Pirae-tea or Taurua-i-te-haapa-raa-manu, and in the Tuamotus it was called Fanui-raro. Beta Cygni was named in New Zealand; it was likely called Whetu-kaupo. Gamma Cygni was called Fanui-runga in the Tuamotus.
Deneb was also often a given name in the Islamic world of astronomy. The name Deneb comes from the Arabic name dhaneb, meaning "tail", from the phrase Dhanab ad-Dajājah, which means "the tail of the hen".
In Western astronomy
In Greek mythology, Cygnus has been identified with several different legendary swans. Zeus disguised himself as a swan to seduce Leda, Spartan king Tyndareus's wife, who gave birth to the Gemini, Helen of Troy, and Clytemnestra; Orpheus was transformed into a swan after his murder, and was said to have been placed in the sky next to his lyre (Lyra); and a man named Cygnus (Greek for swan) was transformed into his namesake.
Later Romans also associated this constellation with the tragic story of Phaethon, the son of Helios the sun god, who demanded to ride his father's sun chariot for a day. Phaethon, however, was unable to control the reins, forcing Zeus to destroy the chariot (and Phaethon) with a thunderbolt, causing it to plummet to the earth into the river Eridanus. According to the myth, Phaethon's close friend or lover, Cygnus of Liguria, grieved bitterly and spent many days diving into the river to collect Phaethon's bones to give him a proper burial. The gods were so touched by Cygnus's devotion that they turned him into a swan and placed him among the stars.
In Ovid's Metamorphoses, there are three people named Cygnus, all of whom are transformed into swans. Alongside Cygnus, noted above, he mentions a boy from Aetolia who throws himself off a cliff when his companion Phyllius refuses to give him a tamed bull that he demands, but he is transformed into a swan and flies away. He also mentions a son of Poseidon, an invulnerable warrior in the Trojan War who is eventually killed by Achilles, but Poseidon saves him by transforming him into a swan.
Together with other avian constellations near the summer solstice, Vultur cadens and Aquila, Cygnus may be a significant part of the origin of the myth of the Stymphalian Birds, one of The Twelve Labours of Hercules.
Characteristics
A very large constellation, Cygnus is bordered by Cepheus to the north and east, Draco to the north and west, Lyra to the west, Vulpecula to the south, Pegasus to the southeast and Lacerta to the east. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Cyg". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 28 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 27.73° and 61.36°. Covering 804 square degrees and around 1.9% of the night sky, Cygnus ranks 16th of the 88 constellations in size.
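The quoted percentage follows from the fact that the full celestial sphere covers 4π steradians, about 41,253 square degrees. A one-line check in Python:

```python
import math

# The full celestial sphere spans 4*pi steradians, i.e. about 41,253 deg^2.
FULL_SKY_SQ_DEG = 4 * math.pi * (180 / math.pi) ** 2   # = 41252.96...

def sky_fraction(area_sq_deg):
    return area_sq_deg / FULL_SKY_SQ_DEG

# Cygnus: 804 deg^2 -> ~1.9% of the sky, matching the figure above.
print(f"{sky_fraction(804):.1%}")   # 1.9%
```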
Cygnus culminates at midnight on 29 June, and is most visible in the evening from the early summer to mid-autumn in the Northern Hemisphere.
Normally, Cygnus is depicted with Delta and Epsilon Cygni as its wings. Deneb, the brightest star in the constellation, is at its tail, and Albireo is at the tip of its beak.
There are several asterisms in Cygnus. In the 17th-century German celestial cartographer Johann Bayer's star atlas the Uranometria, Alpha, Beta and Gamma Cygni form the pole of a cross, while Delta and Epsilon form the cross beam. The nova P Cygni was then considered to be the body of Christ.
Features
There is an abundance of deep-sky objects, with many open clusters, nebulae of various types and supernova remnants found in Cygnus due to its position on the Milky Way.
Its molecular clouds form the Cygnus Rift dark nebula constellation, comprising one end of the Great Rift along the Milky Way's galactic plane. The rift begins around the Northern Coalsack, and partially obscures the larger Cygnus molecular cloud complex behind it, which the North America Nebula is part of.
Stars
Bayer catalogued many stars in the constellation, giving them the Bayer designations from Alpha to Omega and then using lowercase Roman letters to g. John Flamsteed added the Roman letters h, i, k, l and m (these stars were considered informes by Bayer as they lay outside the asterism of Cygnus), but they were later dropped by Francis Baily.
There are several bright stars in Cygnus. α Cygni, called Deneb, is the brightest star in Cygnus. It is a white supergiant star of spectral type A2Iae that varies between magnitudes 1.21 and 1.29, one of the largest and most luminous A-class stars known. It is located about 2600 light-years away. Its traditional name means "tail" and refers to its position in the constellation. Albireo, designated β Cygni, is a binary star celebrated among amateur astronomers for its contrasting hues. The primary is an orange-hued giant star of magnitude 3.1 and the secondary is a blue-green hued star of magnitude 5.1. The system is 430 light-years away and is visible in large binoculars and all amateur telescopes. γ Cygni, traditionally named Sadr, is a yellow-tinged supergiant star of magnitude 2.2, 1800 light-years away. Its traditional name means "breast" and refers to its position in the constellation. δ Cygni (the proper name is Fawaris) is another bright binary star in Cygnus, 166 light-years away, with a period of 800 years. The primary is a blue-white hued giant star of magnitude 2.9, and the secondary is a star of magnitude 6.6. The two components are visible in a medium-sized amateur telescope. The fifth star in Cygnus above magnitude 3 is Aljanah, designated ε Cygni. It is an orange-hued giant star of magnitude 2.5, 72 light-years from Earth.
There are several other dimmer double and binary stars in Cygnus. μ Cygni is a binary star with an optical tertiary component. The binary system has a period of 790 years and is 73 light-years from Earth. The primary and secondary, both white stars, are of magnitude 4.8 and 6.2, respectively. The unrelated tertiary component is of magnitude 6.9. Though the tertiary component is visible in binoculars, the primary and secondary currently require a medium-sized amateur telescope to split, as they will through the year 2020. The two stars will be closest between 2043 and 2050, when they will require a telescope with larger aperture to split. The stars 30 and 31 Cygni form a contrasting double star similar to the brighter Albireo. The two are visible in binoculars. The primary, 31 Cygni, is an orange-hued star of magnitude 3.8, 1400 light-years from Earth. The secondary, 30 Cygni, appears blue-green. It is of spectral type A5IIIn and magnitude 4.83, and is around 610 light-years from Earth. 31 Cygni itself is a binary star; the tertiary component is a blue star of magnitude 7.0. ψ Cygni is a binary star visible in small amateur telescopes, with two white components. The primary is of magnitude 5.0 and the secondary is of magnitude 7.5. 61 Cygni is a binary star visible in large binoculars or a small amateur telescope. It is 11.4 light-years from Earth and has a period of 750 years. Both components are orange-hued dwarf (main sequence) stars; the primary is of magnitude 5.2 and the secondary is of magnitude 6.1. 61 Cygni is significant because Friedrich Wilhelm Bessel determined its parallax in 1838, the first star to have a known parallax.
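Bessel's 1838 measurement can be put in numbers with the defining relation between distance and annual parallax, d[pc] = 1/p[arcsec]. A quick sketch (the helper name is mine):

```python
LY_PER_PARSEC = 3.26156

def parallax_arcsec(distance_ly):
    """Annual parallax (arcseconds) of a star at the given distance.

    Uses the defining relation d[pc] = 1 / p[arcsec].
    """
    return LY_PER_PARSEC / distance_ly

# 61 Cygni at 11.4 light-years shows a parallax of roughly 0.29 arcseconds,
# large enough for Bessel to measure with 1838 instruments:
print(f"{parallax_arcsec(11.4):.3f} arcsec")  # ~0.286
```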
Located near η Cygni is the X-ray source Cygnus X-1, which is now thought to be caused by a black hole accreting matter in a binary star system. This was the first X-ray source widely believed to be a black hole. It is located approximately 2.2 kiloparsecs from the Sun. There is also a supergiant variable star in the system, known as HDE 226868.
Cygnus also contains several other noteworthy X-ray sources. Cygnus X-3 is a microquasar containing a Wolf–Rayet star in orbit around a very compact object, with a period of only 4.8 hours. The system is one of the most intrinsically luminous X-ray sources observed. The system undergoes periodic outbursts of unknown nature, and during one such outburst, the system was found to be emitting muons, likely caused by neutrinos. While the compact object is thought to be a neutron star or possibly a black hole, it is possible that the object is instead a more exotic stellar remnant, possibly the first discovered quark star, hypothesized due to its production of cosmic rays that cannot be explained if the object is a normal neutron star. The system also emits cosmic rays and gamma rays, and has helped shed light on the formation of such rays. Cygnus X-2 is another X-ray binary, containing an A-type giant in orbit around a neutron star with a 9.8-day period. The system is interesting due to the rather small mass of the companion star, as most millisecond pulsars have much more massive companions. Another black hole in Cygnus is V404 Cygni, which consists of a K-type star orbiting around a black hole of around 12 solar masses. The black hole, similar to that of Cygnus X-3, has been hypothesized to be a quark star. 4U 2129+47 is another X-ray binary containing a neutron star which undergoes outbursts, as is EXO 2030+375.
Cygnus is also home to several variable stars. SS Cygni is a dwarf nova which undergoes outbursts every 7–8 weeks. The system's total magnitude varies from 12th magnitude at its dimmest to 8th magnitude at its brightest. The two objects in the system are incredibly close together, with an orbital period of less than 0.28 days. χ Cygni is a red giant and the second-brightest Mira variable star at its maximum. It ranges between magnitudes 3.3 and 14.2, and spectral types S6,2e to S10,4e (MSe) over a period of 408 days; it has a diameter of 300 solar diameters and is 350 light-years from Earth. P Cygni is a luminous blue variable that brightened suddenly to 3rd magnitude in 1600 AD. Since 1715, the star has been of 5th magnitude, despite being more than 5000 light-years from Earth. The star's spectrum is unusual in that it contains very strong emission lines resulting from surrounding nebulosity. W Cygni is a semi-regular variable red giant star, 618 light-years from Earth. It has a maximum magnitude of 5.10 and a minimum magnitude of 6.83, with a period of 131 days; it is a red giant ranging between spectral types M4e-M6e(Tc:)III. NML Cygni is a red hypergiant semi-regular variable star located 5,300 light-years away from Earth. It is one of the largest stars currently known in the galaxy, with a radius exceeding 1,000 solar radii. Its magnitude is around 16.6, and its period is about 940 days.
The star KIC 8462852 (Tabby's Star) has received widespread press coverage because of unusual light fluctuations.
Exoplanets
Cygnus is one of the constellations that the Kepler satellite surveyed in its search for exoplanets, and as a result, there are about a hundred stars in Cygnus with known planets, the most of any constellation. One of the most notable systems is the Kepler-11 system, containing six transiting planets, all within a plane of approximately one degree. It was the first system with six exoplanets to be discovered. With a spectral type of G6V, the star is somewhat cooler than the Sun. The planets are very close to the star; all but the last planet are closer to Kepler-11 than Mercury is to the Sun, and all the planets are more massive than Earth yet have low densities. The naked-eye star 16 Cygni, a triple star approximately 70 light-years from Earth composed of two Sun-like stars and a red dwarf, contains a planet orbiting one of the Sun-like stars, found due to variations in the star's radial velocity. Gliese 777, another naked-eye multiple star system containing a yellow star and a red dwarf, also contains a planet. The planet is somewhat similar to Jupiter, but with slightly more mass and a more eccentric orbit. The Kepler-22 system is also notable for having hosted the most Earth-like exoplanet known when it was discovered in 2011.
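Kepler detects planets by the transit method, in which the fractional dip in starlight during a central transit is approximately the squared ratio of planetary to stellar radius. A minimal sketch of that geometric relation (a simple helper of my own, not Kepler pipeline code):

```python
R_SUN_KM = 695_700.0
R_JUP_KM = 71_492.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in starlight during a central transit: (Rp/R*)^2."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet crossing a Sun-like star dims it by about 1%:
print(f"{transit_depth(R_JUP_KM, R_SUN_KM):.2%}")    # ~1.06%
# An Earth-sized planet produces a far subtler ~0.008% dip:
print(f"{transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")  # ~0.0084%
```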
Star clusters
The rich background of stars of Cygnus can make it difficult to make out open clusters.
M39 (NGC 7092) is an open cluster 950 light-years from Earth that is visible to the unaided eye under dark skies. It is loose, with about 30 stars arranged over a wide area; their conformation appears triangular. The brightest stars of M39 are of the 7th magnitude. Another open cluster in Cygnus is NGC 6910, also called the Rocking Horse Cluster, possessing 16 stars with a diameter of 5 arcminutes visible in a small amateur instrument; it is of magnitude 7.4. The brightest of these are two gold-hued stars, which represent the bottom of the toy it is named for. A larger amateur instrument reveals 8 more stars, nebulosity to the east and west of the cluster, and a diameter of 9 arcminutes. The nebulosity in this region is part of the Gamma Cygni Nebula. The other stars, approximately 3700 light-years from Earth, are mostly blue-white and very hot.
Other open clusters in Cygnus include Dolidze 9, Collinder 421, Dolidze 11, and Berkeley 90. Dolidze 9, 2800 light-years from Earth and relatively young at 20 million years old, is a faint open cluster with up to 22 stars visible in small and medium-sized amateur telescopes. Nebulosity is visible to the north and east of the cluster, which is 7 arcminutes in diameter. The brightest star appears in the eastern part of the cluster and is of the 7th magnitude; another bright star has a yellow hue. Dolidze 11 is an open cluster 400 million years old, farthest away of the three at 3700 light-years. More than 10 stars are visible in an amateur instrument in this cluster, of similar size to Dolidze 9 at 7 arcminutes in diameter, whose brightest star is of magnitude 7.5. It, too, has nebulosity in the east. Collinder 421 is a particularly old open cluster at an age of approximately 1 billion years; it is of magnitude 10.1. 3100 light-years from Earth, more than 30 stars are visible in a diameter of 8 arcminutes. The prominent star in the north of the cluster has a golden color, whereas the stars in the south of the cluster appear orange. Collinder 421 appears to be embedded in nebulosity, which extends past the cluster's borders to its west. Berkeley 90 is a smaller open cluster, with a diameter of 5 arcminutes. More than 16 members appear in an amateur telescope.
Molecular clouds
NGC 6826, the Blinking Planetary Nebula, is a planetary nebula with a magnitude of 8.5, 3200 light-years from Earth. It appears to "blink" in the eyepiece of a telescope because its central star is unusually bright (10th magnitude). When an observer focuses on the star, the nebula appears to fade away. Less than one degree from the Blinking Planetary is the double star 16 Cygni.
The North America Nebula (NGC 7000) is one of the most well-known nebulae in Cygnus, because it is visible to the unaided eye under dark skies, as a bright patch in the Milky Way. However, its characteristic shape is only visible in long-exposure photographs – it is difficult to observe in telescopes because of its low surface brightness. It has low surface brightness because it is so large; at its widest, the North America Nebula is 2 degrees across. Illuminated by a hot embedded star of magnitude 6, NGC 7000 is 1500 light-years from Earth.
To the south of Epsilon Cygni is the Veil Nebula (NGC 6960, 6979, 6992, and 6995), a 5,000-year-old supernova remnant covering approximately 3 degrees of the sky - it is over 50 light-years long. Because of its appearance, it is also called the Cygnus Loop. The Loop is only visible in long-exposure astrophotographs. However, the brightest portion, NGC 6992, is faintly visible in binoculars, and a dimmer portion, NGC 6960, is visible in wide-angle telescopes.
The DR 6 cluster is also nicknamed the "Galactic Ghoul" because of the nebula's resemblance to a human face.
The Gamma Cygni Nebula (IC 1318) includes both bright and dark nebulae in an area of over 4 degrees. DWB 87 is another of the many bright emission nebulae in Cygnus, 7.8 by 4.3 arcminutes. It is in the Gamma Cygni area. Two other emission nebulae include Sharpless 2-112 and Sharpless 2-115. When viewed in an amateur telescope, Sharpless 2–112 appears to be in a teardrop shape. More of the nebula's eastern portion is visible with an O III (doubly ionized oxygen) filter. There is an orange star of magnitude 10 nearby and a star of magnitude 9 near the nebula's northwest edge. Further to the northwest, there is a dark rift and another bright patch. The whole nebula measures 15 arcminutes in diameter. Sharpless 2–115 is another emission nebula with a complex pattern of light and dark patches. Two pairs of stars appear in the nebula; it is larger near the southwestern pair. The open cluster Berkeley 90 is embedded in this large nebula, which measures 30 by 20 arcminutes.
Also of note is the Crescent Nebula (NGC 6888), located between Gamma and Eta Cygni, which was formed by the Wolf–Rayet star HD 192163.
In recent years, amateur astronomers have made some notable Cygnus discoveries. The "Soap bubble nebula" (PN G75.5+1.7), near the Crescent nebula, was discovered on a digital image by Dave Jurasevich in 2007. In 2011, Austrian amateur Matthias Kronberger discovered a planetary nebula (Kronberger 61, now nicknamed "The Soccer Ball") on old survey photos, confirmed recently in images by the Gemini Observatory; both of these are likely too faint to be detected by eye in a small amateur scope.
But a much more obscure and relatively 'tiny' object—one which is readily seen in dark skies by amateur telescopes, under good conditions—is the newly discovered nebula (likely reflection type) associated with the star 4 Cygni (HD 183056): an approximately fan-shaped glowing region of several arcminutes' diameter, to the south and west of the fifth-magnitude star. It was first discovered visually near San Jose, California and publicly reported by amateur astronomer Stephen Waldee in 2007, and was confirmed photographically by Al Howard in 2010. California amateur astronomer Dana Patchick also says he detected it on the Palomar Observatory survey photos in 2005 but had not published it for others to confirm and analyze at the time of Waldee's first official notices and later 2010 paper.
Cygnus X is the largest star-forming region in the solar neighborhood and includes not only some of the brightest and most massive stars known (such as Cygnus OB2-12), but also Cygnus OB2, a massive stellar association classified by some authors as a young globular cluster.
Deep space objects
Cygnus A is the first radio galaxy discovered; at a distance of 730 million light-years from Earth, it is the closest powerful radio galaxy. In the visible spectrum, it appears as an elliptical galaxy in a small cluster. It is classified as an active galaxy because the supermassive black hole at its nucleus is accreting matter, which produces two jets of matter from the poles. The jets' interaction with the interstellar medium creates radio lobes, one source of radio emissions.
Other features
Cygnus is also the apparent source of the WIMP-wind due to the orientation of the solar system's rotation through the galactic halo.
The local Orion-Cygnus Arm and the distant Cygnus Arm are two minor galactic arms named after Cygnus for lying in its background.
See also
Cygnus in Chinese astronomy
Cygnus (spacecraft)
References
Bibliography
Ian Ridpath and Wil Tirion (2007). Stars and Planets Guide. Collins, London; Princeton University Press, Princeton.
External links
The Deep Photographic Guide to the Constellations: Cygnus
Northern Cygnus Mosaic Pan and Zoom in on deep sky objects in Cygnus (requires Shockwave Flash).
The clickable Cygnus
Star Tales – Cygnus
4 Cygni Nebula
Warburg Institute Iconographic Database (medieval and early modern images of Cygnus)
Constellations
Northern constellations
Constellations listed by Ptolemy
Legendary birds | Cygnus (constellation) | [
"Astronomy"
] | 5,061 | [
"Constellations listed by Ptolemy",
"Cygnus (constellation)",
"Constellations",
"Northern constellations",
"Sky regions"
] |
6,423 | https://en.wikipedia.org/wiki/Calorie | The calorie is a unit of energy that originated from the caloric theory of heat. The large calorie, food calorie, dietary calorie, kilocalorie, or kilogram calorie is defined as the amount of heat needed to raise the temperature of one liter of water by one degree Celsius (or one kelvin). The small calorie or gram calorie is defined as the amount of heat needed to cause the same increase in one milliliter of water. Thus, 1 large calorie is equal to 1,000 small calories.
In nutrition and food science, the term calorie and the symbol cal may refer to the large unit or to the small unit in different regions of the world. It is generally used in publications and package labels to express the energy value of foods in per serving or per weight, recommended dietary caloric intake, metabolic rates, etc. Some authors recommend the spelling Calorie and the symbol Cal (both with a capital C) if the large calorie is meant, to avoid confusion; however, this convention is often ignored.
In physics and chemistry, the word calorie and its symbol usually refer to the small unit, the large one being called kilocalorie (kcal). However, the kcal is not officially part of the International System of Units (SI), and is regarded as obsolete, having been replaced in many uses by the SI derived unit of energy, the joule (J), or the kilojoule (kJ) for 1000 joules.
The precise equivalence between calories and joules has varied over the years, but in thermochemistry and nutrition it is now generally assumed that one (small) calorie (thermochemical calorie) is equal to exactly 4.184 J, and therefore one kilocalorie (one large calorie) is 4184 J or 4.184 kJ.
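Given the exact thermochemical values above, calorie–joule conversions reduce to a single multiplication. A minimal sketch (the helper names are mine, not from any standard library):

```python
J_PER_SMALL_CAL = 4.184   # thermochemical calorie, exact by definition
J_PER_KCAL = 4184.0       # "large" (food) calorie

def kcal_to_kj(kcal):
    """Convert large calories (kcal) to kilojoules."""
    return kcal * J_PER_KCAL / 1000.0

def kj_to_kcal(kj):
    """Convert kilojoules to large calories (kcal)."""
    return kj * 1000.0 / J_PER_KCAL

# A 2,000 kcal daily diet is about 8,368 kJ:
print(f"{kcal_to_kj(2000):,.0f} kJ")   # 8,368 kJ
```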
History
The term "calorie" comes . It was first introduced by Nicolas Clément, as a unit of heat energy, in lectures on experimental calorimetry during the years 1819–1824. This was the "large" calorie. The term (written with lowercase "c") entered French and English dictionaries between 1841 and 1867.
The same term was used for the "small" unit by Pierre Antoine Favre (chemist) and Johann T. Silbermann (physicist) in 1852.
In 1879, Marcellin Berthelot distinguished between gram-calorie and kilogram-calorie, and proposed using "Calorie", with capital "C", for the large unit. This usage was adopted by Wilbur Olin Atwater, a professor at Wesleyan University, in 1887, in an influential article on the energy content of food.
The smaller unit was used by U.S. physician Joseph Howard Raymond, in his classic 1894 textbook A Manual of Human Physiology. He proposed calling the "large" unit "kilocalorie", but the term did not catch on until some years later.
The small calorie (cal) was recognized as a unit of the CGS system in 1896, alongside the already-existing CGS unit of energy, the erg (first suggested by Clausius in 1864, under the name ergon, and officially adopted in 1882).
In 1928, there were already serious complaints about the possible confusion arising from the two main definitions of the calorie and whether the notion of using the capital letter to distinguish them was sound.
The joule was the officially adopted SI unit of energy at the ninth General Conference on Weights and Measures in 1948. The calorie was mentioned in the 7th edition of the SI brochure as an example of a non-SI unit.
The alternate spelling calory is a less-common, non-standard variant.
Definitions
The "small" calorie is broadly defined as the amount of energy needed to increase the temperature of 1 gram of water by 1 °C (or 1 K, which is the same increment, a gradation of one percent of the interval between the melting point and the boiling point of water). The actual amount of energy required to accomplish this temperature increase depends on the atmospheric pressure and the starting temperature; different choices of these parameters have resulted in several different precise definitions of the unit.
The two definitions most common in older literature appear to be the 15 °C calorie and the thermochemical calorie. Until 1948, the latter was defined as 4.1833 international joules; the current standard of 4.184 J was chosen to have the new thermochemical calorie represent the same quantity of energy as before.
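For reference, the main historical definitions differ only around the fourth significant figure. The values below are the commonly cited ones; the thermochemical and International Table figures are exact by convention, while the 15 °C figure is empirical and is quoted to varying precision in different sources:

```python
# Approximate joule equivalents of the main historical calorie definitions.
CALORIE_DEFINITIONS_J = {
    "thermochemical":      4.184,   # exact; the value used in this article
    "15 degree C":         4.1855,  # heat to warm water from 14.5 to 15.5 C
    "International Table": 4.1868,  # exact; adopted 1956
}

for name, joules in CALORIE_DEFINITIONS_J.items():
    print(f"{name:>20}: {joules} J")
```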
Usage
Nutrition
In the United States, in a nutritional context, the "large" unit is used almost exclusively. It is generally written "calorie" with lowercase "c" and symbol "cal", even in government publications. The SI unit kilojoule (kJ) may be used instead, in legal or scientific contexts. Most American nutritionists prefer the unit kilocalorie to the unit kilojoule, whereas most physiologists prefer to use kilojoules. In the majority of other countries, nutritionists prefer the kilojoule to the kilocalorie.
In the European Union, on nutrition facts labels, energy is expressed in both kilojoules and kilocalories, abbreviated as "kJ" and "kcal" respectively.
In China, only kilojoules are given.
Food energy
The unit is most commonly used to express food energy, namely the specific energy (energy per mass) of metabolizing different types of food. For example, fat (triglyceride lipids) contains 9 kilocalories per gram (kcal/g), while carbohydrates (sugar and starch) and protein contain approximately 4 kcal/g. Alcohol in food contains 7 kcal/g. The "large" unit is also used to express recommended nutritional intake or consumption, as in "calories per day".
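The per-gram figures above let food energy be computed directly from macronutrient masses. A minimal sketch using exactly those factors (the function and dictionary names are mine):

```python
# Atwater-style energy factors quoted in the text (kcal per gram).
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4, "alcohol": 7}

def food_energy_kcal(grams_by_nutrient):
    """Total food energy from a dict of {nutrient: grams}."""
    return sum(KCAL_PER_GRAM[n] * g for n, g in grams_by_nutrient.items())

# A snack with 10 g fat, 30 g carbohydrate and 5 g protein:
print(food_energy_kcal({"fat": 10, "carbohydrate": 30, "protein": 5}))  # 230
```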
Dieting is the practice of eating food in a regulated way to decrease, maintain, or increase body weight, or to prevent and treat diseases such as diabetes and obesity. As weight loss depends on reducing caloric intake, different kinds of calorie-reduced diets have been shown to be generally effective.
Chemistry and physics
In other scientific contexts, the term "calorie" and the symbol "cal" almost always refers to the small unit; the "large" unit being generally called "kilocalorie" with symbol "kcal". It is mostly used to express the amount of energy released in a chemical reaction or phase change, typically per mole of substance, as in kilocalories per mole. It is also occasionally used to specify other energy quantities that relate to reaction energy, such as enthalpy of formation and the size of activation barriers. However, it is increasingly being superseded by the SI unit, the joule (J); and metric multiples thereof, such as the kilojoule (kJ).
The lingering use in chemistry is largely due to the fact that the energy released by a reaction in aqueous solution, expressed in kilocalories per mole of reagent, is numerically close to the concentration of the reagent in moles per liter multiplied by the change in the temperature of the solution in kelvins or degrees Celsius. However, this estimate assumes that the volumetric heat capacity of the solution is 1 kcal/(L⋅K), which is not exact even for pure water.
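That back-of-envelope relation can be written out explicitly: the adiabatic temperature rise of the solution is the reaction energy released per litre divided by the volumetric heat capacity. The sketch below adopts the approximate 1 kcal/(L·K) value discussed above; the 13.7 kcal/mol example is close to a typical strong acid–base neutralization enthalpy:

```python
def temperature_rise_K(dH_kcal_per_mol, conc_mol_per_L,
                       heat_capacity_kcal_per_L_K=1.0):
    """Back-of-envelope adiabatic temperature rise of a reacting solution.

    Assumes the solution's volumetric heat capacity is ~1 kcal/(L*K),
    which is only approximate even for pure water.
    """
    return dH_kcal_per_mol * conc_mol_per_L / heat_capacity_kcal_per_L_K

# A reaction releasing 13.7 kcal/mol at 0.5 mol/L warms the solution
# by roughly 7 K:
print(f"{temperature_rise_K(13.7, 0.5):.1f} K")  # ~6.9
```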
See also
Basal metabolic rate
Caloric theory
Conversion of units of energy
Empty calorie
Food energy
A calorie is a calorie
Nutrition facts label
British thermal unit
Satiety value
References
Units of energy
Heat transfer
Non-SI metric units | Calorie | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,640 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Non-SI metric units",
"Quantity",
"Units of energy",
"Thermodynamics",
"Units of measurement"
] |
6,424 | https://en.wikipedia.org/wiki/Corona%20Australis | Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, ostrich nest, a tent, or even a hut belonging to a rock hyrax.
Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are the two brightest stars with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System—a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly.
Name
The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922.
In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations.
The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively.
Characteristics
Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around the 30th of June and ranks 80th in area. Only visible at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States.
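The 53° figure follows directly from the constellation's northernmost declination: an object at declination δ rises above the horizon only for observers south of latitude 90° + δ. A small sketch of this arithmetic (atmospheric refraction ignored):

```python
# Northernmost boundary declination of Corona Australis, from the text.
DEC_NORTH = -36.77  # degrees

def max_northern_latitude(declination_deg):
    """Northernmost latitude from which an object at this declination rises."""
    return 90.0 + declination_deg

print(max_northern_latitude(DEC_NORTH))  # 53.23 -> "south of 53 degrees north"
```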
Features
While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through to Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming.
Stars
The only star in the constellation to have received a name is Alfecca Meridiana or Alpha CrA. The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, Alfecca means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; at present it has 31 times the Sun's luminosity, and a radius and mass each about 2.3 times the Sun's. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, making it one of the most luminous K0-type stars visible to the naked eye. 100 million years old, it has a radius of 43 solar radii () and a mass of between 4.5 and 5 solar masses (). Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye.
Some of the more prominent double stars include Gamma Coronae Australis—a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems are known as contact binaries as the component stars are so close together they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta1 and Eta2 CrA, which form an optical double. Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white. Kappa Coronae Australis is an easily resolved optical double—the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa2 is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa1 is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude of 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees.
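The quoted combined magnitude of Gamma Coronae Australis can be checked with Pogson's relation: fluxes add linearly, while magnitudes are logarithmic. A short sketch:

```python
import math

def combined_magnitude(m1, m2):
    """Apparent magnitude of two unresolved stars (their fluxes add)."""
    total_flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(total_flux)

# Two components of magnitude 5.01 each, as quoted for Gamma Coronae Australis:
print(round(combined_magnitude(5.01, 5.01), 2))  # 4.26, matching the quoted ~4.2
```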
Zeta Coronae Australis is a rapidly rotating main sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62. Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted.
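The parsec and light-year figures quoted for RX J1856.5-3754 are related by the standard conversion 1 pc ≈ 3.2616 ly, with the uncertainty scaling by the same factor:

```python
LY_PER_PARSEC = 3.2616

def pc_to_ly(parsecs, uncertainty=0.0):
    """Convert a distance and its uncertainty from parsecs to light years."""
    return parsecs * LY_PER_PARSEC, uncertainty * LY_PER_PARSEC

print(pc_to_ly(140, 40))  # (456.624, 130.464) -> about 460 +/- 130 light years
```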
Corona Australis Molecular Cloud
The Corona Australis Molecular Cloud is a dark molecular cloud just north of Beta Coronae Australis. Illuminated by a number of embedded reflection nebulae, the cloud fans out from Epsilon Coronae Australis eastward along the constellation border with Sagittarius. It contains Herbig–Haro objects (protostars) and some very young stars, and, at 430 light years (130 parsecs) from the Solar System, is one of the closest star-forming regions, lying at the surface of the Local Bubble. The first nebulae of the cloud were recorded in 1865 by Johann Friedrich Julius Schmidt.
Between Epsilon and Gamma Coronae Australis, the cloud contains the dark nebula and star-forming region Bernes 157. It measures 55 by 18 arcminutes and possesses several stars around magnitude 13. These stars are dimmed by up to 8 magnitudes by the obscuring dust clouds. At the center of the active star-forming region lies the Coronet cluster (also called the R CrA Cluster), which is used in studying star and protoplanetary disk formation. R Coronae Australis (R CrA) is an irregular variable star ranging from magnitude 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. Other stars of the cluster include S Coronae Australis, a G-class dwarf and T Tauri star.
Nearby to the north, another young variable star, TY Coronae Australis, illuminates another nebula: the reflection nebula NGC 6726/NGC 6727. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby.
The globular cluster NGC 6723 can be seen adjacent to the nebulosity, but it is not part of the complex; it lies in the neighbouring constellation of Sagittarius and is much farther away.
Deep sky objects
IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments. The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC class Wolf–Rayet star. IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north–south direction. Descriptions of its color encompass blue, blue-tinged green, and green-tinged blue.
Corona Australis' location near the Milky Way means that galaxies are uncommonly seen. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two galaxies merging, one of which is an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium and 3.9 degrees west-southwest of Beta Sagittarii. However, amateur telescopes will only show a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast–southwest.
Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th-magnitude star GSC 7914 178 1 and five 10th- to 11th-magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Estimated at between magnitude 6.3 and 6.6, it is visible in binoculars and small telescopes. Around 22,000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars, but the core remains unresolved.
Meteor showers
The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate. In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated as the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related.
History
Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN, as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea".
In the 3rd century BC, the Greek didactic poet Aratus wrote of the constellation but did not name it, instead calling the two crowns Στεφάνοι (Stephanoi). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος, "Southern Wreath", while other authors associated it with either Sagittarius (the crown having fallen off his head) or Centaurus; with the former, it was called Corona Sagittarii. Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as Parvum Coelum ("Canopy", "Little Sky") in the 5th century. The 18th-century French astronomer Jérôme Lalande gave it the names Sertum Australe ("Southern Garland") and Orbiculus Capitis, while German poet and author Philippus Caesius called it Corolla ("Little Crown") or Spira Australis ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon. Sometimes, Corona Australis was not the wreath of Sagittarius but arrows held in his hand.
Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky.
In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ). The constellation itself was known as ti'en pieh ("Heavenly Turtle") and during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese and hence supplanted the turtle in this role. Arabic names for Corona Australis include Al Ķubbah "the Tortoise", Al Ĥibā "the Tent" or Al Udḥā al Na'ām "the Ostrich Nest". It was later given the name Al Iklīl al Janūbiyyah, which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively.
The ǀXam-speaking San people of South Africa knew the constellation as ≠nabbe ta !nu, "house of branches", owned originally by the Dassie (rock hyrax), with the star pattern depicting people sitting in a semicircle around a fire.
The indigenous Boorong people of northwestern Victoria saw it as Won, a boomerang thrown by Totyarguil (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the Usiam, and Orion, called the Seg. The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Coronae Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called Tanuma, and in the Tuamotus, it was called Na Kaua-ki-Tonga.
See also
Corona Australis (Chinese astronomy)
Chamaeleon complex
References
Citations
Sources
Online sources
SIMBAD
External links
The Deep Photographic Guide to the Constellations: Corona Australis
Warburg Institute Iconographic Database (medieval and early modern images of Corona Australis)
Constellations
Constellations listed by Ptolemy
Mythological clothing
Roman mythology
Southern constellations | Corona Australis | [
"Astronomy"
] | 3,994 | [
"Constellations listed by Ptolemy",
"Southern constellations",
"Constellations",
"Corona Australis",
"Sky regions"
] |
6,432 | https://en.wikipedia.org/wiki/Caelum | Caelum is a faint constellation in the southern sky, introduced in the 1750s by Nicolas Louis de Lacaille and counted among the 88 modern constellations. Its name means "chisel" in Latin, and it was formerly known as Caelum Sculptorium ("Engraver's Chisel"); it is a rare word, unrelated to the far more common Latin caelum, meaning "sky", "heaven", or "atmosphere". It is the eighth-smallest constellation, and subtends a solid angle of around 0.038 steradians, just less than that of Corona Australis.
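The steradian figure follows from the constellation's area (125 square degrees, quoted later in the article): one square degree is (π/180)² steradians. A quick check, including the slightly larger Corona Australis for comparison:

```python
import math

SR_PER_SQ_DEG = (math.pi / 180.0) ** 2  # ~3.0462e-4 steradians per square degree

def solid_angle_sr(area_sq_deg):
    """Solid angle in steradians of an area given in square degrees."""
    return area_sq_deg * SR_PER_SQ_DEG

print(round(solid_angle_sr(125), 4))  # Caelum: 0.0381
print(round(solid_angle_sr(128), 4))  # Corona Australis: 0.039, slightly larger
```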
Due to its small size and location away from the plane of the Milky Way, Caelum is a rather barren constellation, with few objects of interest. The constellation's brightest star, Alpha Caeli, is only of magnitude 4.45, and only one other star, (Gamma) γ1 Caeli, is brighter than magnitude 5. Other notable objects in Caelum are RR Caeli, a binary star with one known planet approximately away; X Caeli, a Delta Scuti variable that forms an optical double with γ1 Caeli; and HE0450-2958, a Seyfert galaxy that at first appeared as just a jet, with no host galaxy visible.
History
Caelum was introduced as one of fourteen southern constellations in the 18th century by Nicolas Louis de Lacaille, a celebrated French astronomer of the Age of Enlightenment.
It retains its French name Burin among French speakers, Latinized in Lacaille's catalogue of 1763 as Caelum Sculptoris (“Engraver's Chisel”).
Francis Baily shortened this name to Caelum, as suggested by John Herschel. In Lacaille's original chart, it was shown as a pair of engraver's tools: a standard burin and a more specific shape-forming échoppe tied by a ribbon, but it came to be depicted as a simple chisel. Johann Elert Bode rendered the name as a plural with a singular possessor, Caela Scalptoris – in German die Grabstichel (“the Engraver’s Chisels”) – but this did not stick.
Characteristics
Caelum is bordered by Dorado and Pictor to the south, Horologium and Eridanus to the east, Lepus to the north, and Columba to the west. Covering only 125 square degrees, it ranks 81st of the 88 modern constellations in size.
Its main asterism consists of four stars, and twenty stars in total are brighter than magnitude 6.5.
The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are a 12-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and and declinations of to . The International Astronomical Union (IAU) adopted the three-letter abbreviation “Cae” for the constellation in 1922.
Its main stars are visible in favourable conditions, given a clear southern horizon, for part of the year as far north as about the 41st parallel. For observers in the mid- and more populous higher latitudes of the Southern Hemisphere, the stars stay clear of daylight for at least part of every day they are above the horizon. Like Taurus, Eridanus and Orion to its north, Caelum culminates at midnight in December, at the height of the southern summer. In the southern winter, around June, it culminates near midday, well above the Sun, so it can only be observed sufficiently clear of the horizon while rising before dawn or setting after dusk. In high June the key stars can be traced before dawn in the east from South Africa, Argentina, their subtropical neighbours and parts of Australia; near the equator the stars lose their night-time visibility in May and June; and in the northern tropics and subtropics they compete poorly with the Sun from late February to mid-September, with post-sunset viewing in March further hampered by the light of the Milky Way.
Notable features
Stars
Caelum is a faint constellation: It has no star brighter than magnitude 4 and only two stars brighter than magnitude 5.
Lacaille gave six stars Bayer designations, labeling them Alpha (α) to Zeta (ζ) in 1756, but omitted Epsilon (ε) and designated two adjacent stars as Gamma (γ). Bode extended the designations to Rho (ρ) for other stars, but most of these have fallen out of use. Caelum is too far south for any of its stars to bear Flamsteed designations.
The brightest star, (Alpha) α Caeli, is a double star, containing an F-type main-sequence star of magnitude 4.45 and a red dwarf of magnitude 12.5, from Earth. (Beta) β Caeli, another F-type star of magnitude 5.05, is further away, being located from Earth. Unlike α, β Caeli is a subgiant star, slightly evolved from the main sequence. (Delta) δ Caeli, also of magnitude 5.05, is a B-type subgiant and is much farther from Earth, at .
(Gamma) γ1 Caeli is a double star with a red giant primary of magnitude 4.58 and a secondary of magnitude 8.1. The primary is from Earth. The two components are difficult to resolve with small amateur telescopes because of their difference in visual magnitude and their close separation. This star system forms an optical double with the unrelated X Caeli (previously named γ2 Caeli), a Delta Scuti variable located from Earth. These are a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. The only other variable star in Caelum visible to the naked eye is RV Caeli, a pulsating red giant of spectral type M1III, which varies between magnitudes 6.44 and 6.56.
Three other stars in Caelum are still occasionally referred to by their Bayer designations, although they are only on the edge of naked-eye visibility. (Nu) ν Caeli is another double star, containing a white giant of magnitude 6.07 and a star of magnitude 10.66, of unknown spectral type. The system is approximately away. (Lambda) λ Caeli, at magnitude 6.24, is much redder and farther away, being a red giant around from Earth. (Zeta) ζ Caeli is even fainter, at only magnitude 6.36. This star, located away, is a K-type subgiant of spectral type K1. The other twelve naked-eye stars in Caelum, including RV Caeli, are no longer referred to by Bode's Bayer designations.
One of the nearest stars in Caelum is the eclipsing binary star RR Caeli, at a distance of . This star system consists of a dim red dwarf and a white dwarf. Despite its closeness to the Earth, the system's apparent magnitude is only 14.40 due to the faintness of its components, and thus it cannot be easily seen with amateur equipment. The system is a post-common-envelope binary and is losing angular momentum over time, which will eventually cause mass transfer from the red dwarf to the white dwarf. In approximately 9–20 billion years, this will cause the system to become a cataclysmic variable. In 2012, the system was found to contain a giant planet, and there is evidence for a second substellar body; it is now believed that two planets orbit RR Caeli.
Another nearby star is LHS 1678, an astrometric binary located some 65 light-years away. The primary star is a red dwarf hosting three close-in exoplanets, all smaller than Earth; the secondary component is likely a brown dwarf. This system is notable as the closest star system to Alpha Caeli, just 3.3 light-years distant. Due to this closeness, α Caeli would shine more brightly in LHS 1678's sky than Sirius does in ours.
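The claim that α Caeli would outshine Sirius follows from the inverse-square law, under which apparent magnitude rescales as m2 = m1 + 5 log10(d2/d1). The sketch below assumes a catalogue distance of about 65.7 light-years for α Caeli, a value the text does not state:

```python
import math

def magnitude_at_distance(m_known, d_known, d_new):
    """Rescale an apparent magnitude to a new distance (inverse-square law)."""
    return m_known + 5 * math.log10(d_new / d_known)

# Assumed: alpha Caeli at magnitude 4.45 and ~65.7 ly, viewed from 3.3 ly away.
print(round(magnitude_at_distance(4.45, 65.7, 3.3), 1))  # about -2.0
# Sirius appears at magnitude -1.46 from Earth, so alpha Caeli would outshine it.
```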
Deep-sky objects
Due to its small size and location away from the plane of the Milky Way, Caelum is rather devoid of deep-sky objects and contains no Messier objects. The only deep-sky object in Caelum to receive much attention is HE0450-2958, an unusual Seyfert galaxy. The jet's host galaxy originally proved elusive, and the jet appeared to emanate from nothing. Although it has been suggested that the object is an ejected supermassive black hole, the host is now agreed to be a small galaxy that is difficult to see due to light from the jet and a nearby starburst galaxy.
The 13th-magnitude planetary nebula PN G243-37.1 also lies in the eastern regions of the constellation. It is one of only a few planetary nebulae found in the galactic halo, lying light-years below the Milky Way's 1,000-light-year-thick disk.
The galaxies NGC 1595, NGC 1598, and the Carafe galaxy are known as the Carafe group. The Carafe galaxy is a Seyfert galaxy with a ring. It is located at right ascension 4h 28m, declination −47° 54′ (epoch 2000.0).
Notes
References
External links
Starry Night Photography – Caelum Constellation
Southern constellations
Constellations listed by Lacaille | Caelum | [
"Astronomy"
] | 1,962 | [
"Caelum",
"Southern constellations",
"Constellations",
"Constellations listed by Lacaille"
] |
6,435 | https://en.wikipedia.org/wiki/Canes%20Venatici | Canes Venatici is one of the 88 constellations designated by the International Astronomical Union (IAU). It is a small northern constellation that was created by Johannes Hevelius in the 17th century. Its name is Latin for 'hunting dogs', and the constellation is often depicted in illustrations as representing the dogs of Boötes the Herdsman, a neighboring constellation.
Cor Caroli is the constellation's brightest star, with an apparent magnitude of 2.9. La Superba (Y CVn) is one of the reddest naked-eye stars and one of the brightest carbon stars. The Whirlpool Galaxy is a spiral galaxy tilted face-on to observers on Earth, and was the first galaxy whose spiral nature was discerned. In addition, quasar TON 618 is one of the most massive black holes with the mass of 66 billion solar masses.
History
The stars of Canes Venatici are not bright. In classical times, they were listed by Ptolemy as unfigured stars below the constellation Ursa Major in his star catalogue.
In medieval times, the identification of these stars with the dogs of Boötes arose through a mistranslation: some of Boötes's stars were traditionally described as representing the club (, ) of Boötes. When the Greek astronomer Ptolemy's Almagest was translated from Greek to Arabic, the translator Hunayn ibn Ishaq did not know the Greek word and rendered it as a similar-sounding compound Arabic word for a kind of weapon, writing , which means 'the staff having a hook'.
When the Arabic text was later translated into Latin, the translator, Gerard of Cremona, mistook ('hook') for ('dogs'). Both written words look the same in Arabic text without diacritics, leading Gerard to write it as ('spearshaft-having dogs').
In 1533, the German astronomer Peter Apian depicted Boötes as having two dogs with him.
These spurious dogs floated about the astronomical literature until Hevelius decided to make them a separate constellation in 1687. Hevelius chose the name Asterion for the northern dog and Chara for the southern dog, depicting them as Canes Venatici, 'the hunting dogs', in his star atlas.
In his star catalogue, the Czech astronomer Antonín Bečvář assigned the names Asterion to β CVn and Chara to α CVn.
Although the International Astronomical Union dropped several constellations in 1930 that were medieval and Renaissance innovations, Canes Venatici survived to become one of the 88 IAU designated constellations.
Neighbors and borders
Canes Venatici is bordered by Ursa Major to the north and west, Coma Berenices to the south, and Boötes to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CVn". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides.
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between +27.84° and +52.36°. Covering 465 square degrees, it ranks 38th of the 88 constellations in size.
Prominent stars and deep-sky objects
Stars
Canes Venatici contains no very bright stars. The Bayer designation stars, Alpha and Beta Canum Venaticorum are only of third and fourth magnitude respectively. Flamsteed catalogued 25 stars in the constellation, labelling them 1 to 25 Canum Venaticorum (CVn); however, 1CVn turned out to be in Ursa Major, 13CVn was in Coma Berenices, and 22CVn did not exist.
Alpha Canum Venaticorum, also known as ('heart of Charles'), is the constellation's brightest star, named by Sir Charles Scarborough in memory of King Charles I, the executed king of Britain. The English astronomer William Henry Smyth wrote in 1844 that α CVn was brighter than usual during the Restoration, as Charles II returned to England to take the throne, but gave no source for this statement, which seems to be apocryphal. Cor Caroli is a wide double star, with a primary of magnitude 2.9 and a secondary of magnitude 5.6; the primary is 110 light-years from Earth. The primary also has an unusually strong variable magnetic field.
Beta Canum Venaticorum, or Chara, is a yellow-hued main sequence star of magnitude 4.25, 27 light-years from Earth. Its common name comes from the Greek word for 'joy'. It has been listed as an astrobiologically interesting star because of its proximity and similarity to the Sun. However, no exoplanets have been discovered around it so far.
Y Canum Venaticorum (La Superba) is a semiregular variable star that varies between magnitudes 5.0 and 6.5 over a period of around 158 days. It is a carbon star and is deep red in color, with a spectral type of C54J(N3).
AM Canum Venaticorum, a very blue star of magnitude 14, is the prototype of a special class of cataclysmic variable stars, in which the companion star is a white dwarf, rather than a main sequence star. It is 143 parsecs distant from the Sun.
RS Canum Venaticorum is the prototype of a special class of binary stars whose components are chromospherically active and optically variable.
R Canum Venaticorum is a Mira variable that ranges between magnitudes 6.5 and 12.9 over a period of approximately 329 days.
Supervoid
The Giant Void, an extremely large void (a region of the universe containing very few galaxies), lies within the vicinity of this constellation. It is regarded as the second-largest void ever discovered, slightly larger than the Eridanus Supervoid, smaller than the proposed KBC Void, and 1,200 times the volume of a typical void. It was discovered in 1988 in a deep-sky survey. Its centre is approximately 1.5 billion light-years away.
Deep-sky objects
Canes Venatici contains five Messier objects, including four galaxies. One of the more significant galaxies in Canes Venatici is the Whirlpool Galaxy (M51, NGC 5194), a spiral galaxy seen face-on, together with its companion NGC 5195. M51 was the first galaxy recognised as having a spiral structure, first observed by Lord Rosse in 1845. It lies 37 million light-years from Earth. Widely considered to be one of the most beautiful galaxies visible, M51 has many star-forming regions and nebulae in its arms, coloring them pink and blue in contrast to the older yellow core. Its smaller companion, NGC 5195, has very few star-forming regions and thus appears yellow. It is passing behind M51 and may be the cause of the larger galaxy's prodigious star formation.
Other notable spiral galaxies in Canes Venatici are the Sunflower Galaxy (M63, NGC 5055), M94 (NGC 4736), and M106 (NGC 4258).
M63, the Sunflower Galaxy, was named for its appearance in large amateur telescopes. It is a spiral galaxy with an integrated magnitude of 9.0.
M94 (NGC 4736) is a small face-on spiral galaxy with approximate magnitude 8.0, about 15 million light-years from Earth.
NGC 4631 is a barred spiral galaxy, which is one of the largest and brightest edge-on galaxies in the sky.
M3 (NGC 5272) is a globular cluster 32,000 light-years from Earth. It is 18′ in diameter, and at magnitude 6.3 is bright enough to be seen with binoculars. It can even be seen with the naked eye under particularly dark skies.
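M3's angular size and distance imply its physical diameter via the small-angle approximation (size ≈ distance × angle in radians); a short sketch:

```python
import math

def physical_diameter_ly(distance_ly, angular_diameter_arcmin):
    """Small-angle estimate: physical size = distance * angle (in radians)."""
    return distance_ly * math.radians(angular_diameter_arcmin / 60.0)

# M3: about 32,000 light-years away with an apparent diameter of 18 arcminutes.
print(round(physical_diameter_ly(32_000, 18)))  # ~168 light-years across
```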
M94, also cataloged as NGC 4736, is a face-on spiral galaxy 15 million light-years from Earth. It has very tight spiral arms and a bright core. The outskirts of the galaxy are incredibly luminous in the ultraviolet because of a ring of new stars surrounding the core 7,000 light-years in diameter. Though astronomers are not sure what has caused this ring of new stars, some hypothesize that it is from shock waves caused by a bar that is thus far invisible.
TON 618 is a hyperluminous quasar and blazar in this constellation, near its border with the neighboring Coma Berenices. It possesses a black hole with a mass 66 billion times that of the Sun, making it one of the most massive black holes ever measured. The quasar is also associated with a Lyman-alpha blob.
Footnotes
References
Bibliography
External links
Photos of Canes Venatici and the star clusters and galaxies found within it on AllTheSky.com
Clickable map of Canes Venatici
Photographic catalogue of deep sky objects in Canes Venatici (PDF)
Northern constellations
Constellations listed by Johannes Hevelius | Canes Venatici | [
"Astronomy"
] | 1,914 | [
"Canes Venatici",
"Constellations listed by Johannes Hevelius",
"Constellations",
"Northern constellations"
] |
6,436 | https://en.wikipedia.org/wiki/Chamaeleon | Chamaeleon is a small constellation in the deep southern sky. It is named after the chameleon, a kind of lizard. It was first defined in the 16th century.
History
Chamaeleon was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius and Jodocus Hondius. Johann Bayer was the first uranographer to put Chamaeleon in a celestial atlas. It was one of many constellations created by European explorers in the 15th and 16th centuries out of unfamiliar Southern Hemisphere stars.
Features
Stars
There are four bright stars in Chamaeleon that form a compact diamond-shape approximately 10 degrees from the south celestial pole and about 15 degrees south of Acrux, along the axis formed by Acrux and Gamma Crucis. Alpha Chamaeleontis is a white-hued star of magnitude 4.1, 63 light-years from Earth. Beta Chamaeleontis is a blue-white hued star of magnitude 4.2, 271 light-years from Earth. Gamma Chamaeleontis is a red-hued giant star of magnitude 4.1, 413 light-years from Earth. The other bright star in Chamaeleon is Delta Chamaeleontis, a wide double star. The brighter star is Delta2 Chamaeleontis, a blue-hued star of magnitude 4.4. Delta1 Chamaeleontis, the dimmer component, is an orange-hued giant star of magnitude 5.5. They both lie about 350 light years away.
Chamaeleon is also the location of Cha 110913, a unique dwarf star or proto-solar system.
Deep-sky objects
In 1999, a nearby open cluster was discovered centered on the star η Chamaeleontis. The cluster, known as either the Eta Chamaeleontis cluster or Mamajek 1, is 8 million years old, and lies 316 light years from Earth.
The constellation contains a number of molecular clouds (the Chamaeleon dark clouds) that are forming low-mass T Tauri stars. The cloud complex lies some 400 to 600 light years from Earth, and contains tens of thousands of solar masses of gas and dust. The most prominent cluster of T Tauri stars and young B-type stars is in the Chamaeleon I cloud, associated with the reflection nebula IC 2631.
Chamaeleon contains one planetary nebula, NGC 3195, which is fairly faint. It appears in a telescope at about the same apparent size as Jupiter.
Equivalents
In Chinese astronomy, the stars that form Chamaeleon were classified as the Little Dipper among the Southern Asterisms by Xu Guangqi. Chamaeleon is sometimes also called the Frying Pan in Australia.
See also
Chamaeleon (Chinese astronomy)
IAU-recognized constellations
Citations
References
External links
The Deep Photographic Guide to the Constellations: Chamaeleon
The clickable Chamaeleon
"The eta Chamaeleontis Cluster: A Remarkable New Nearby Young Open Cluster" (Mamajek, Lawson, & Feigelson 1999)
"WEBDA open cluster database entry for Mamajek 1"
Ian Ridpath's Star Tales – Chamaeleon
NGC 3620 Barred spiral galaxy
Southern constellations
Constellations listed by Petrus Plancius | Chamaeleon | [
"Astronomy"
] | 733 | [
"Chamaeleon",
"Constellations listed by Petrus Plancius",
"Southern constellations",
"Constellations"
] |
6,437 | https://en.wikipedia.org/wiki/Cholesterol | Cholesterol is the principal sterol of all higher animals, distributed in body tissues, especially the brain and spinal cord, and in animal fats and oils.
Cholesterol is biosynthesized by all animal cells and is an essential structural and signaling component of animal cell membranes. In vertebrates, hepatic cells typically produce the greatest amounts. In the brain, astrocytes produce cholesterol and transport it to neurons. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D.
Elevated levels of cholesterol in the blood, especially when bound to low-density lipoprotein (LDL, often referred to as "bad cholesterol"), may increase the risk of cardiovascular disease.
François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. In 1815, chemist Michel Eugène Chevreul named the compound "cholesterine".
Etymology
The word cholesterol comes from Ancient Greek chole- 'bile' and stereos 'solid', followed by the chemical suffix -ol for an alcohol.
Physiology
Cholesterol is essential for all animal life. While most cells are capable of synthesizing it, the majority of cholesterol is ingested or synthesized by hepatocytes and transported in the blood to peripheral cells. The levels of cholesterol in peripheral tissues are dictated by a balance of uptake and export. Under normal conditions, brain cholesterol is separate from peripheral cholesterol, i.e., the dietary and hepatic cholesterol do not cross the blood brain barrier. Rather, astrocytes produce and distribute cholesterol in the brain.
De novo synthesis, both in astrocytes and hepatocytes, occurs by a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes.
Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any, effect on concentrations of cholesterol in the blood. Surprisingly, in rats, blood cholesterol is inversely correlated with cholesterol consumption: the more cholesterol a rat eats, the lower its blood cholesterol. During the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase.
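As a rough arithmetic illustration of the figures above (about 1,000 mg/day synthesized, about 307 mg/day ingested, and a roughly 35 g body pool), one can estimate how long the total pool would take to turn over once. This ignores incomplete absorption, compensatory down-regulation of synthesis, and excretion, so it is an order-of-magnitude sketch only:

```python
# Figures quoted above for a typical adult male (all in milligrams):
daily_synthesis_mg = 1000   # endogenous production
daily_intake_mg = 307       # typical US dietary intake
body_pool_mg = 35_000       # total body cholesterol

naive_turnover_days = body_pool_mg / (daily_synthesis_mg + daily_intake_mg)
print(round(naive_turnover_days, 1))  # ~26.8 days per complete pool turnover
```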
Plants make cholesterol in very small amounts. In larger quantities they produce phytosterols, chemically similar substances which can compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between approximately 200 and 300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day.
Function
Membranes
Cholesterol is present in varying degrees in all animal cell membranes, but is absent in prokaryotes. It is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chain of the other lipids. Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move.
The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a trans conformation, making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions.
Substrate presentation
Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated, causing it to traffic to cholesterol-dependent lipid domains sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC), which is unsaturated and of low abundance in lipid rafts. PC localizes to the disordered region of the cell along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2, where it gains access to its substrate PC and commences catalysis based on substrate presentation.
Signaling
Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which brings receptor proteins in close proximity with high concentrations of second messenger molecules. In multiple layers, cholesterol and phospholipids, both electrical insulators, can facilitate speed of transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell or oligodendrocyte membranes, provides insulation for more efficient conduction of impulses. Demyelination (loss of myelin) is believed to be part of the basis for multiple sclerosis.
Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα), and may be the endogenous ligand for the receptor. The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that the ERRα should be de-orphanized and classified as a receptor for cholesterol.
As a chemical precursor
Within cells, cholesterol is also a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in the calcium metabolism and all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives.
Epidermis
The stratum corneum is the outermost layer of the epidermis. It is composed of terminally differentiated and enucleated corneocytes that reside within a lipid matrix, like "bricks and mortar". Together with ceramides and free fatty acids, cholesterol forms the lipid mortar, a water-impermeable barrier that prevents evaporative water loss. As a rule of thumb, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (≈50% by weight), cholesterol (≈25% by weight), and free fatty acids (≈15% by weight), with smaller quantities of other lipids also being present. Cholesterol sulfate reaches its highest concentration in the granular layer of the epidermis. Steroid sulfatase then decreases its concentration in the stratum corneum, the outermost layer of the epidermis. The relative abundance of cholesterol sulfate in the epidermis varies across different body sites, with the heel of the foot having the lowest concentration.
Metabolism
Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are then stored in the gallbladder, which then excretes them in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream.
Biosynthesis and regulation
Biosynthesis
Almost all animal tissues synthesize cholesterol from acetyl-CoA. All animal cells (exceptions exist within the invertebrates) manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the brain, the adrenal glands, and the reproductive organs.
Synthesis within the body starts with the mevalonate pathway where two molecules of acetyl CoA condense to form acetoacetyl-CoA. This is followed by a second condensation between acetyl CoA and acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl CoA (HMG-CoA).
This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs).
Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one decarboxylation step that requires ATP.
Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase.
Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum.
Oxidosqualene cyclase then cyclizes squalene to form lanosterol.
Finally, lanosterol is converted to cholesterol via either of two pathways, the Bloch pathway, or the Kandutsch-Russell pathway.
The final 19 steps to cholesterol involve NADPH and oxygen to oxidize methyl groups for removal of carbons, mutases to move alkene groups, and NADH to reduce ketones.
Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism.
Regulation of cholesterol synthesis
Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. A higher intake of cholesterol from food leads to a net decrease in endogenous production, whereas a lower intake has the opposite effect. The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the protein SREBP (sterol regulatory element-binding protein 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low.
The cleaved SREBP then migrates to the nucleus and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase in endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates the expression of many genes that control lipid formation and metabolism and body fuel allocation.
Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain. The membrane domain senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, which makes it more susceptible to destruction by the proteasome. This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low.
Plasma transport and regulation of absorption
As an isolated molecule, cholesterol is only minimally soluble in water; it is barely hydrophilic, and so dissolves in blood only at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids, whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble. This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle along with phospholipids and proteins. Cholesterol esters bound to fatty acid, on the other hand, are transported within the fatty hydrophobic core of the lipoprotein, along with triglyceride.
There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). Lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some is carried as its native "free" alcohol form (with the cholesterol-OH group facing the water surrounding the particles), while the rest is carried as fatty acyl esters, also known as cholesterol esters, within the particles.
Lipoprotein particles are organized by complex apolipoproteins, typically 80–100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into specific cells and tissues currently ingesting these fat transport particles. These surface receptors serve as unique molecular signatures, which then help determine fat distribution delivery throughout the body.
Chylomicrons, the least dense cholesterol transport particles, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in more cholesterol-rich chylomicron remnants and is taken up from here to the bloodstream by the liver.
VLDL particles are produced by the liver from triacylglycerol and cholesterol which was not used in the synthesis of bile acids. These particles contain apolipoprotein B100 and apolipoprotein E in their shells and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL particles are then consumed in two processes: half is metabolized by HTGL and taken up by the LDL receptor on the liver cell surfaces, while the other half continues to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles.
LDL particles are the major blood cholesterol carriers. Each one contains approximately 1,500 molecules of cholesterol ester. LDL particle shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. Both LDL and its receptor form vesicles within a cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis or esterified and stored within the cell, so as to not interfere with the cell membranes.
LDL receptors are used up during cholesterol absorption, and its synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol de novo, according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL particles from being taken up. Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol.
When this process becomes unregulated, LDL particles without receptors begin to appear in the blood. These LDL particles are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with "bad" cholesterol.
HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT). Large numbers of HDL particles correlate with better health outcomes, whereas low numbers of HDL particles are associated with atheromatous disease progression in the arteries.
Metabolism, recycling and excretion
Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. Three different mechanisms can form these: autoxidation, secondary oxidation to lipid peroxidation, and cholesterol-metabolizing enzyme oxidation. A great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription.
In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated-cholesterol, are used. These derivatives undergo degradation upon storage, and it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns.
Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids forms the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallises and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to 1 g of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and it can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces.
Although cholesterol is a steroid generally associated with mammals, the human pathogen Mycobacterium tuberculosis is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, but have evolved in such a way as to bind large steroid substrates like cholesterol.
Dietary sources
Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed. Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, shellfish, and butter. Human breast milk also contains significant quantities of cholesterol.
Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, with cholesterol remaining in plant foods only in minor amounts or absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines and reduce the absorption of both dietary and bile cholesterol. A typical diet contributes on the order of 0.2 gram of phytosterols, which is not enough to have a significant impact on blocking cholesterol absorption. Phytosterol intake can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol.
Medical guidelines and recommendations
In 2015, the scientific advisory panel of the U.S. Department of Health and Human Services and the U.S. Department of Agriculture for the 2015 edition of the Dietary Guidelines for Americans dropped the previously recommended limit on dietary cholesterol consumption of 300 mg per day in favor of a new recommendation to "eat as little dietary cholesterol as possible", thereby acknowledging an association between a diet low in cholesterol and reduced risk of cardiovascular disease.
A 2013 report by the American Heart Association and the American College of Cardiology recommended focusing on healthy dietary patterns rather than specific cholesterol limits, as specific limits are hard for clinicians and consumers to implement. They recommend the DASH and Mediterranean diets, which are low in cholesterol. A 2017 review by the American Heart Association recommends replacing saturated fats with polyunsaturated fats to reduce cardiovascular disease risk.
Some supplemental guidelines have recommended doses of phytosterols in the 1.6–3.0 grams per day range (Health Canada, EFSA, ATP III, FDA). A meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. The benefits of a diet supplemented with phytosterols have also been questioned.
Clinical significance
Hypercholesterolemia
According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the HDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined, but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects. Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people.
Elevated levels of the lipoprotein fractions, LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progress of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. A post hoc analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol".
About one in 250 individuals has a genetic mutation in the LDL receptor gene that causes them to have familial hypercholesterolemia. Inherited high cholesterol can also involve genetic mutations in the PCSK9 gene and in the gene for apolipoprotein B.
Elevated cholesterol levels are treatable by a diet that reduces or eliminates saturated fat, and trans fats, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, monoclonal antibody therapy (PCSK9 inhibitors), nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolemia.
Human trials using HMG-CoA reductase inhibitors, known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex.
The 1987 report of the National Cholesterol Education Program Adult Treatment Panel suggested the total blood cholesterol level should be: <200 mg/dL normal blood cholesterol, 200–239 mg/dL borderline-high, 240 mg/dL and above high cholesterol. The American Heart Association provides a similar set of guidelines for total (fasting) blood cholesterol levels and risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease. The average global mean total cholesterol for humans has remained at about 4.6 mmol/L (178 mg/dL) for men and women, both crude and age-standardized, for nearly 40 years from 1980 to 2018, with some regional variations and a reduction of total cholesterol in Western nations.
More current testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 100 mg/dL (2.6 mmol/L).
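The two units used throughout this section are related by the molar mass of cholesterol (about 386.65 g/mol), so 1 mmol/L corresponds to roughly 38.67 mg/dL. The following minimal sketch (illustrative only; the constant and function names are not from any clinical standard) shows the conversion:

```python
# Unit conversion for blood cholesterol values.
# Assumes cholesterol's molar mass of ~386.65 g/mol,
# which gives ~38.67 mg/dL per 1 mmol/L.
MGDL_PER_MMOLL = 38.67

def mgdl_to_mmoll(mg_dl: float) -> float:
    """Convert a cholesterol value from mg/dL to mmol/L."""
    return mg_dl / MGDL_PER_MMOLL

def mmoll_to_mgdl(mmol_l: float) -> float:
    """Convert a cholesterol value from mmol/L to mg/dL."""
    return mmol_l * MGDL_PER_MMOLL

# Example: the desirable LDL threshold of 100 mg/dL
print(round(mgdl_to_mmoll(100), 1))  # 2.6, matching the 2.6 mmol/L in the text
```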
Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides and the LDL is estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]. Direct LDL measures are used when triglycerides exceed 400 mg/dL. The estimated VLDL and LDL have more error when triglycerides are above 400 mg/dL.
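As a minimal sketch of the estimation just described (assuming all values are in mg/dL; the function name is illustrative, and real laboratories may use variant formulas):

```python
def friedewald_ldl(total_cholesterol: float, hdl: float, triglycerides: float) -> float:
    """Estimate LDL cholesterol from a standard lipid panel (mg/dL).

    Implements the Friedewald formula described in the text:
    LDL = total cholesterol - HDL - estimated VLDL,
    where VLDL is estimated as one-fifth of the triglycerides.
    """
    if triglycerides > 400:
        # Per the text, the estimate has more error above 400 mg/dL,
        # and direct LDL measurement is used instead.
        raise ValueError("Triglycerides exceed 400 mg/dL; measure LDL directly")
    estimated_vldl = triglycerides / 5
    return total_cholesterol - hdl - estimated_vldl

# Example panel (hypothetical values): total 200, HDL 50, triglycerides 150
print(friedewald_ldl(200, 50, 150))  # 120.0 mg/dL
```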
In the Framingham Heart Study, each 10 mg/dL (0.26 mmol/L) increase in total cholesterol levels increased 30-year overall mortality by 5% and CVD mortality by 9%. However, subjects over the age of 50 had an 11% increase in overall mortality and a 14% increase in cardiovascular disease mortality per 1 mg/dL (0.026 mmol/L) per year drop in total cholesterol levels. The researchers attributed this phenomenon to reverse causation, whereby the disease itself increases the risk of death and also changes a myriad of factors, such as weight loss and the inability to eat, which lower serum cholesterol. This effect was also shown in men of all ages and women over 50 in the Vorarlberg Health Monitoring and Promotion Programme. These groups were more likely to die of cancer, liver diseases, and mental diseases with very low total cholesterol, of 186 mg/dL (4.8 mmol/L) and lower. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a marker for frailty occurring with age.
Hypocholesterolemia
Abnormally low levels of cholesterol are termed hypocholesterolemia. Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance which causes upregulation of the LDL receptor, may result in hypocholesterolemia.
Testing
The American Heart Association recommends testing cholesterol every 4–6 years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that people taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. For men ages 45 to 65 and women ages 55 to 65, a cholesterol test should occur every 1–2 years, and for seniors over age 65, an annual test should be performed.
A blood sample, taken by a healthcare professional from an arm vein after 12 hours of fasting, is used to measure a lipid profile for a) total cholesterol, b) HDL cholesterol, c) LDL cholesterol, and d) triglycerides. Results may be expressed as "calculated", indicating that the LDL value is derived from the measured total cholesterol, HDL, and triglycerides.
Blood cholesterol levels are considered "normal" or "desirable" if a person has a total cholesterol of 5.2 mmol/L (200 mg/dL) or less, an HDL value of more than 1 mmol/L (40 mg/dL; "the higher, the better"), an LDL value of less than 2.6 mmol/L (100 mg/dL), and a triglyceride level of less than 1.7 mmol/L (150 mg/dL). Blood cholesterol in people with lifestyle, aging, or cardiovascular risk factors, such as diabetes mellitus, hypertension, family history of coronary artery disease, or angina, is evaluated against different thresholds.
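A short sketch encoding the "desirable" thresholds listed above (values in mmol/L; illustrative only, not a clinical tool):

```python
def lipid_panel_desirable(total: float, hdl: float, ldl: float, trig: float) -> dict:
    """Check a lipid panel (all values in mmol/L) against the desirable
    levels given in the text; True means the value is in the desirable range."""
    return {
        "total cholesterol": total <= 5.2,  # 5.2 mmol/L (200 mg/dL) or less
        "HDL": hdl > 1.0,                   # more than 1 mmol/L (40 mg/dL)
        "LDL": ldl < 2.6,                   # less than 2.6 mmol/L (100 mg/dL)
        "triglycerides": trig < 1.7,        # less than 1.7 mmol/L (150 mg/dL)
    }

# Hypothetical panel with all values in the desirable range
print(lipid_panel_desirable(total=4.9, hdl=1.3, ldl=2.4, trig=1.2))
```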
Cholesteric liquid crystals
Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the liquid crystalline "cholesteric phase". The cholesteric phase is, in fact, a chiral nematic phase, and it changes colour when its temperature changes. This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints.
Stereoisomers
Cholesterol has 256 possible stereoisomers arising from its eight stereocenters (2⁸ = 256), although only two of the stereoisomers have biochemical significance (nat-cholesterol and ent-cholesterol, for natural and enantiomer, respectively), and only one occurs naturally (nat-cholesterol).
See also
Arcus senilis, a "cholesterol ring" in the eyes
Carcinogen

A carcinogen is any agent that promotes the development of cancer. Carcinogens can include synthetic chemicals, naturally occurring substances, physical agents such as ionizing and non-ionizing radiation, and biologic agents such as viruses and bacteria. Most carcinogens act by creating mutations in DNA that disrupt a cell's normal processes for regulating growth, leading to uncontrolled cellular proliferation. This occurs when the cell's DNA repair processes fail to identify DNA damage, allowing the defect to be passed down to daughter cells. The damage accumulates over time. This is typically a multi-step process during which the regulatory mechanisms within the cell are gradually dismantled, allowing for unchecked cellular division.
The specific mechanism of carcinogenic activity is unique to each agent and cell type. Carcinogens can be broadly categorized, however, as activation-dependent or activation-independent, according to the agent's ability to engage directly with DNA. Activation-dependent agents are relatively inert in their original form, but are bioactivated in the body into metabolites or intermediaries capable of damaging human DNA. These are also known as "indirect-acting" carcinogens. Examples of activation-dependent carcinogens include polycyclic aromatic hydrocarbons (PAHs), heterocyclic aromatic amines, and mycotoxins. Activation-independent carcinogens, or "direct-acting" carcinogens, are those that are capable of directly damaging DNA without any modification to their molecular structure. These agents typically include electrophilic groups that react readily with the net negative charge of DNA molecules. Examples of activation-independent carcinogens include ultraviolet light, ionizing radiation and alkylating agents.
The time from exposure to a carcinogen to the development of cancer is known as the latency period. For most solid tumors in humans the latency period is between 10 and 40 years, depending on cancer type. For blood cancers, the latency period may be as short as two years. Due to prolonged latency periods, identification of carcinogens can be challenging.
A number of organizations review and evaluate the cumulative scientific evidence regarding the potential carcinogenicity of specific substances. Foremost among these is the International Agency for Research on Cancer (IARC). IARC routinely publishes monographs in which specific substances are evaluated for their potential carcinogenicity to humans and subsequently categorized into one of four groupings: Group 1: Carcinogenic to humans, Group 2A: Probably carcinogenic to humans, Group 2B: Possibly carcinogenic to humans and Group 3: Not classifiable as to its carcinogenicity to humans. Other organizations that evaluate the carcinogenicity of substances include the National Toxicology Program of the US Public Health Service, NIOSH, the American Conference of Governmental Industrial Hygienists and others.
There are numerous sources of exposure to carcinogens, including ultraviolet radiation from the sun, radon gas emitted in residential basements, environmental contaminants such as chlordecone, cigarette smoke, and dietary exposures such as alcohol and processed meats. Occupational exposures represent a major source of carcinogens, with an estimated 666,000 annual fatalities worldwide attributable to work-related cancers. According to NIOSH, 3–6% of cancers worldwide are due to occupational exposures. Well-established pairings of occupational carcinogens with cancers include vinyl chloride with hemangiosarcoma of the liver, benzene with leukemia, aniline dyes with bladder cancer, asbestos with mesothelioma, and polycyclic aromatic hydrocarbons with scrotal cancer among chimney sweeps, to name a few.
Radiation
Ionizing Radiation
CERCLA identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, gamma, or neutron, and the radioactive strength), its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure determine the potential hazard. Carcinogenicity of radiation depends on the type of radiation, type of exposure, and penetration. For example, alpha radiation has low penetration and is not a hazard outside the body, but alpha emitters are carcinogenic when inhaled or ingested. Thorotrast, an (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is a potent human carcinogen because of its retention within various organs and persistent emission of alpha particles. Low-level ionizing radiation may induce irreparable DNA damage (leading to replicational and transcriptional errors needed for neoplasia, or triggering viral interactions), leading to premature aging and cancer.
Non-ionizing radiation
Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum including radio waves, microwaves, infrared radiation and visible light are thought not to be, because they have insufficient energy to break chemical bonds. Evidence for carcinogenic effects of non-ionizing radiation is generally inconclusive, though there are some documented cases of radar technicians with prolonged high exposure experiencing significantly higher cancer incidence.
Higher-energy radiation, including ultraviolet radiation (present in sunlight), generally is carcinogenic if received in sufficient doses. For most people, ultraviolet radiation from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years.
Substances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) are not carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation.
Common carcinogens associated with food
Alcohol
Alcohol is a carcinogen of the head and neck, esophagus, liver, colon and rectum, and breast. It has a synergistic effect with tobacco smoke in the development of head and neck cancers. In the United States approximately 6% of cancers and 4% of cancer deaths are attributable to alcohol use.
Processed meats
Chemicals used in processed and cured meat such as some brands of bacon, sausages, and ham may produce carcinogens. For example, nitrites used as food preservatives in cured meat such as bacon have been linked demographically, though not causally, to colon cancer.
Meats cooked at high temperatures
Cooking food at high temperatures, for example grilling or barbecuing meats, may also lead to the formation of minute quantities of many potent carcinogens that are comparable to those found in cigarette smoke (i.e., benzo[a]pyrene). Charring of food resembles coking and tobacco pyrolysis, and produces similar carcinogens. There are several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, which are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2–3 minutes before grilling shortens the time on the hot pan and removes heterocyclic amine (HCA) precursors, which can help minimize the formation of these carcinogens.
Acrylamide in foods
Frying, grilling or broiling food at high temperatures, especially starchy foods, until a toasted crust is formed generates acrylamides. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth".
Biologic Agents
Several biologic agents are known carcinogens.
Aflatoxin B1, a toxin produced by the fungus Aspergillus flavus, a common contaminant of stored grains and nuts, is a known cause of hepatocellular cancer. The bacterium H. pylori is known to cause stomach cancer and MALT lymphoma. Hepatitis B and C are associated with the development of hepatocellular cancer. HPV is the primary cause of cervical cancer.
Cigarette smoke
Tobacco smoke contains at least 70 known carcinogens and is implicated in the development of numerous types of cancers including cancers of the lung, larynx, esophagus, stomach, kidney, pancreas, liver, bladder, cervix, colon, rectum and blood. Potent carcinogens found in cigarette smoke include polycyclic aromatic hydrocarbons (PAH, such as benzo(a)pyrene), benzene, and nitrosamine.
Occupational carcinogens
Given that populations of workers are more likely to have consistent, often high level exposures to chemicals rarely encountered in normal life, much of the evidence for the carcinogenicity of specific agents is derived from studies of workers.
Selected carcinogens
Others
Gasoline (contains aromatics)
Lead and its compounds
Alkylating antineoplastic agents (e.g., mechlorethamine)
Styrene
Other alkylating agents (e.g., dimethyl sulfate)
Ultraviolet radiation from the sun and UV lamps
Other ionizing radiation (X-rays, gamma rays, etc.)
Low refining or unrefined mineral oils
Mechanisms of carcinogenicity
Carcinogens can be classified as genotoxic or nongenotoxic. Genotoxins cause irreversible genetic damage or mutations by binding to DNA. Genotoxins include chemical agents like N-nitroso-N-methylurea (NMU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA.
Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds.
Classification
International Agency for Research on Cancer
The International Agency for Research on Cancer (IARC) is an intergovernmental agency established in 1965, which forms part of the World Health Organization of the United Nations. It is based in Lyon, France. Since 1971 it has published a series of Monographs on the Evaluation of Carcinogenic Risks to Humans that have been highly influential in the classification of possible carcinogens.
Group 1: the agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.
Group 2A: the agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans.
Group 2B: the agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans.
Group 3: the agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans.
Group 4: the agent (mixture) is probably not carcinogenic to humans.
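The five groups above lend themselves to a simple lookup structure. The sketch below (illustrative only; not an official IARC data format) maps group codes to their meanings as given in the list:

```python
# Illustrative mapping of the IARC monograph groups described above.
IARC_GROUPS = {
    "1": "Carcinogenic to humans",
    "2A": "Probably carcinogenic to humans",
    "2B": "Possibly carcinogenic to humans",
    "3": "Not classifiable as to its carcinogenicity to humans",
    "4": "Probably not carcinogenic to humans",
}

def describe_iarc_group(code: str) -> str:
    """Return the meaning of an IARC group code, e.g. '2A'."""
    return IARC_GROUPS.get(code, "Unknown group code")

print(describe_iarc_group("2A"))  # Probably carcinogenic to humans
```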
Globally Harmonized System
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is a United Nations initiative to attempt to harmonize the different systems of assessing chemical risk which currently exist (as of March 2009) around the world. It classifies carcinogens into two categories, of which the first may be divided again into subcategories if so desired by the competent regulatory authority:
Category 1: known or presumed to have carcinogenic potential for humans
Category 1A: the assessment is based primarily on human evidence
Category 1B: the assessment is based primarily on animal evidence
Category 2: suspected human carcinogens
U.S. National Toxicology Program
The National Toxicology Program of the U.S. Department of Health and Human Services is mandated to produce a biennial Report on Carcinogens. As of August 2024, the latest edition was the 15th report (2021). It classifies carcinogens into two groups:
Known to be a human carcinogen
Reasonably anticipated to be a human carcinogen
American Conference of Governmental Industrial Hygienists
The American Conference of Governmental Industrial Hygienists (ACGIH) is a private organization best known for its publication of threshold limit values (TLVs) for occupational exposure and monographs on workplace chemical hazards. It assesses carcinogenicity as part of a wider assessment of the occupational hazards of chemicals.
Group A1: Confirmed human carcinogen
Group A2: Suspected human carcinogen
Group A3: Confirmed animal carcinogen with unknown relevance to humans
Group A4: Not classifiable as a human carcinogen
Group A5: Not suspected as a human carcinogen
European Union
The European Union classification of carcinogens is contained in the Regulation (EC) No 1272/2008. It consists of three categories:
Category 1A: Known to have carcinogenic potential for humans, based largely on human evidence
Category 1B: Presumed to have carcinogenic potential for humans, based largely on animal evidence
Category 2: Suspected of causing cancer
The former European Union classification of carcinogens was contained in the Dangerous Substances Directive and the Dangerous Preparations Directive. It also consisted of three categories:
Category 1: Substances known to be carcinogenic to humans.
Category 2: Substances which should be regarded as if they are carcinogenic to humans.
Category 3: Substances which cause concern for humans, owing to possible carcinogenic effects but in respect of which the available information is not adequate for making a satisfactory assessment.
This assessment scheme is being phased out in favor of the GHS scheme (see above), to which it is very close in category definitions.
Safe Work Australia
In 1999, under its previous name of NOHSC, Safe Work Australia published the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(1999)].
Section 4.76 of this document outlines the criteria for classifying carcinogens as approved by the Australian government. This classification consists of three categories:
Category 1: Substances known to be carcinogenic to humans.
Category 2: Substances that should be regarded as if they were carcinogenic to humans.
Category 3: Substances that have possible carcinogenic effects in humans but about which there is insufficient information to make an assessment.
Major carcinogens implicated in the four most common cancers worldwide
In this section, the carcinogens implicated as the main causative agents of the four most common cancers worldwide are briefly described. These four cancers are lung, breast, colon, and stomach cancers. Together they account for about 41% of worldwide cancer incidence and 42% of cancer deaths (for more detailed information on the carcinogens implicated in these and other cancers, see references).
Lung cancer
Lung cancer (pulmonary carcinoma) is the most common cancer in the world, both in terms of cases (1.6 million cases; 12.7% of total cancer cases) and deaths (1.4 million deaths; 18.2% of total cancer deaths). Lung cancer is largely caused by tobacco smoke. Risk estimates for lung cancer in the United States indicate that tobacco smoke is responsible for 90% of lung cancers. Other factors are implicated in lung cancer, and these factors can interact synergistically with smoking so that total attributable risk adds up to more than 100%. These factors include occupational exposure to carcinogens (about 9-15%), radon (10%) and outdoor air pollution (1-2%).
Tobacco smoke is a complex mixture of more than 5,300 identified chemicals. The most important carcinogens in tobacco smoke have been determined by a "Margin of Exposure" approach. Using this approach, the most important tumorigenic compounds in tobacco smoke were, in order of importance, acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, cadmium, acetaldehyde, ethylene oxide, and isoprene. Most of these compounds cause DNA damage by forming DNA adducts or by inducing other alterations in DNA. DNA damage is subject to error-prone DNA repair or can cause replication errors. Such errors in repair or replication can result in mutations in tumor suppressor genes or oncogenes, leading to cancer.
Breast cancer
Breast cancer is the second most common cancer [(1.4 million cases, 10.9%), but ranks 5th as cause of death (458,000, 6.1%)]. Increased risk of breast cancer is associated with persistently elevated blood levels of estrogen. Estrogen appears to contribute to breast carcinogenesis by three processes: (1) the metabolism of estrogen to genotoxic, mutagenic carcinogens, (2) the stimulation of tissue growth, and (3) the repression of phase II detoxification enzymes that metabolize reactive oxygen species (ROS), leading to increased oxidative DNA damage.
The major estrogen in humans, estradiol, can be metabolized to quinone derivatives that form adducts with DNA. These derivatives can cause depurination, the removal of bases from the phosphodiester backbone of DNA, followed by inaccurate repair or replication of the apurinic site leading to mutation and eventually cancer. This genotoxic mechanism may interact in synergy with estrogen receptor-mediated, persistent cell proliferation to ultimately cause breast cancer. Genetic background, dietary practices and environmental factors also likely contribute to the incidence of DNA damage and breast cancer risk.
Consumption of alcohol has also been linked to an increased risk for breast cancer.
Colon cancer
Colorectal cancer is the third most common cancer [1.2 million cases (9.4%), 608,000 deaths (8.0%)]. Tobacco smoke may be responsible for up to 20% of colorectal cancers in the United States. In addition, substantial evidence implicates bile acids as an important factor in colon cancer. Twelve studies (summarized in Bernstein et al.) indicate that the bile acids deoxycholic acid (DCA) or lithocholic acid (LCA) induce production of DNA-damaging reactive oxygen species or reactive nitrogen species in human or animal colon cells. Furthermore, 14 studies showed that DCA and LCA induce DNA damage in colon cells. Also 27 studies reported that bile acids cause programmed cell death (apoptosis).
Increased apoptosis can result in selective survival of cells that are resistant to induction of apoptosis. Colon cells with reduced ability to undergo apoptosis in response to DNA damage would tend to accumulate mutations, and such cells may give rise to colon cancer. Epidemiologic studies have found that fecal bile acid concentrations are increased in populations with a high incidence of colon cancer. Dietary increases in total fat or saturated fat result in elevated DCA and LCA in feces and elevated exposure of the colon epithelium to these bile acids. When the bile acid DCA was added to the standard diet of wild-type mice, invasive colon cancer was induced in 56% of the mice after 8 to 10 months. Overall, the available evidence indicates that DCA and LCA are centrally important DNA-damaging carcinogens in colon cancer.
Stomach cancer
Stomach cancer is the fourth most common cancer [990,000 cases (7.8%), 738,000 deaths (9.7%)]. Helicobacter pylori infection is the main causative factor in stomach cancer. Chronic gastritis (inflammation) caused by H. pylori is often long-standing if not treated. Infection of gastric epithelial cells with H. pylori results in increased production of reactive oxygen species (ROS). ROS cause oxidative DNA damage including the major base alteration 8-hydroxydeoxyguanosine (8-OHdG). 8-OHdG resulting from ROS is increased in chronic gastritis. The altered DNA base can cause errors during DNA replication that have mutagenic and carcinogenic potential. Thus H. pylori-induced ROS appear to be the major carcinogens in stomach cancer because they cause oxidative DNA damage leading to carcinogenic mutations.
Diet is also thought to be a contributing factor in stomach cancer: in Japan, where very salty pickled foods are popular, the incidence of stomach cancer is high. Preserved meat such as bacon, sausages, and ham increases the risk, while a diet rich in fresh fruit, vegetables, peas, beans, grains, nuts, seeds, herbs, and spices will reduce the risk. The risk also increases with age.
Camouflage

Camouflage is the use of any combination of materials, coloration, or illumination for concealment, either by making animals or objects hard to see, or by disguising them as something else. Examples include the leopard's spotted coat, the battledress of a modern soldier, and the leaf-mimic katydid's wings. A third approach, motion dazzle, confuses the observer with a conspicuous pattern, making the object visible but momentarily harder to locate. The majority of camouflage methods aim for crypsis, often through a general resemblance to the background, high contrast disruptive coloration, eliminating shadow, and countershading. In the open ocean, where there is no background, the principal methods of camouflage are transparency, silvering, and countershading, while the ability to produce light is among other things used for counter-illumination on the undersides of cephalopods such as squid. Some animals, such as chameleons and octopuses, are capable of actively changing their skin pattern and colors, whether for camouflage or for signalling. It is possible that some plants use camouflage to evade being eaten by herbivores.
Military camouflage was spurred by the increasing range and accuracy of firearms in the 19th century. In particular the replacement of the inaccurate musket with the rifle made personal concealment in battle a survival skill. In the 20th century, military camouflage developed rapidly, especially during World War I. On land, artists such as André Mare designed camouflage schemes and observation posts disguised as trees. At sea, merchant ships and troop carriers were painted in dazzle patterns that were highly visible, but designed to confuse enemy submarines as to the target's speed, range, and heading. During and after World War II, a variety of camouflage schemes were used for aircraft and for ground vehicles in different theatres of war. The use of radar since the mid-20th century has largely made camouflage for fixed-wing military aircraft obsolete.
Non-military use of camouflage includes making cell telephone towers less obtrusive and helping hunters to approach wary game animals. Patterns derived from military camouflage are frequently used in fashion clothing, exploiting their strong designs and sometimes their symbolism. Camouflage themes recur in modern art, and both figuratively and literally in science fiction and works of literature.
History
In ancient Greece, Aristotle (384–322 BC) commented on the colour-changing abilities, both for camouflage and for signalling, of cephalopods including the octopus, in his Historia animalium:
Camouflage has been a topic of interest and research in zoology for well over a century. According to Charles Darwin's 1859 theory of natural selection, features such as camouflage evolved by providing individual animals with a reproductive advantage, enabling them to leave more offspring, on average, than other members of the same species. In his Origin of Species, Darwin wrote:
The English zoologist Edward Bagnall Poulton studied animal coloration, especially camouflage. In his 1890 book The Colours of Animals, he classified different types such as "special protective resemblance" (where an animal looks like another object), or "general aggressive resemblance" (where a predator blends in with the background, enabling it to approach prey). His experiments showed that swallow-tailed moth pupae were camouflaged to match the backgrounds on which they were reared as larvae. Poulton's "general protective resemblance" was at that time considered to be the main method of camouflage, as when Frank Evers Beddard wrote in 1892 that "tree-frequenting animals are often green in colour. Among vertebrates numerous species of parrots, iguanas, tree-frogs, and the green tree-snake are examples". Beddard did however briefly mention other methods, including the "alluring coloration" of the flower mantis and the possibility of a different mechanism in the orange tip butterfly. He wrote that "the scattered green spots upon the under surface of the wings might have been intended for a rough sketch of the small flowerets of the plant [an umbellifer], so close is their mutual resemblance." He also explained the coloration of sea fish such as the mackerel: "Among pelagic fish it is common to find the upper surface dark-coloured and the lower surface white, so that the animal is inconspicuous when seen either from above or below."
The artist Abbott Handerson Thayer formulated what is sometimes called Thayer's Law, the principle of countershading. However, he overstated the case in the 1909 book Concealing-Coloration in the Animal Kingdom, arguing that "All patterns and colors whatsoever of all animals that ever preyed or are preyed on are under certain normal circumstances obliterative" (that is, cryptic camouflage), and that "Not one 'mimicry' mark, not one 'warning color'... nor any 'sexually selected' color, exists anywhere in the world where there is not every reason to believe it the very best conceivable device for the concealment of its wearer", and using paintings such as Peacock in the Woods (1907) to reinforce his argument. Thayer was roundly mocked for these views by critics including Teddy Roosevelt.
The English zoologist Hugh Cott's 1940 book Adaptive Coloration in Animals corrected Thayer's errors, sometimes sharply: "Thus we find Thayer straining the theory to a fantastic extreme in an endeavour to make it cover almost every type of coloration in the animal kingdom." Cott built on Thayer's discoveries, developing a comprehensive view of camouflage based on "maximum disruptive contrast", countershading and hundreds of examples. The book explained how disruptive camouflage worked, using streaks of boldly contrasting colour, paradoxically making objects less visible by breaking up their outlines. While Cott was more systematic and balanced in his view than Thayer, and did include some experimental evidence on the effectiveness of camouflage, his 500-page textbook was, like Thayer's, mainly a natural history narrative which illustrated theories with examples.
Experimental evidence that camouflage helps prey avoid being detected by predators was first provided in 2016, when ground-nesting birds (plovers and coursers) were shown to survive according to how well their egg contrast matched the local environment.
Evolution
As there is a lack of evidence for camouflage in the fossil record, studying the evolution of camouflage strategies is very difficult. Furthermore, camouflage traits must be both adaptive (providing a fitness gain in a given environment) and heritable (in other words, the trait must undergo positive selection). Thus, studying the evolution of camouflage strategies requires an understanding of the genetic components and various ecological pressures that drive crypsis.
Fossil history
Camouflage is a soft-tissue feature that is rarely preserved in the fossil record, but rare fossilised skin samples from the Cretaceous period show that some marine reptiles were countershaded. The skins, pigmented with dark-coloured eumelanin, reveal that both leatherback turtles and mosasaurs had dark backs and light bellies. There is fossil evidence of camouflaged insects going back over 100 million years, for example lacewing larvae that stick debris all over their bodies, much as their modern descendants do, hiding them from their prey. Dinosaurs appear to have been camouflaged, as a 120-million-year-old fossil of a Psittacosaurus has been preserved with countershading.
Genetics
Camouflage does not have a single genetic origin. However, studying the genetic components of camouflage in specific organisms illuminates the various ways that crypsis can evolve among lineages.
Many cephalopods have the ability to actively camouflage themselves, controlling crypsis through neural activity. For example, the genome of the common cuttlefish includes 16 copies of the reflectin gene, which grants the organism remarkable control over coloration and iridescence. The reflectin gene is thought to have originated through transposition from symbiotic Aliivibrio fischeri bacteria, which provide bioluminescence to their hosts. While not all cephalopods use active camouflage, ancient cephalopods may have inherited the gene horizontally from symbiotic A. fischeri, with divergence occurring through subsequent gene duplication (such as in the case of Sepia officinalis) or gene loss (as with cephalopods that have no active camouflage capabilities).[3] This is unique as an instance of camouflage arising through horizontal gene transfer from an endosymbiont. However, other methods of gene transfer are common in the evolution of camouflage strategies in other lineages. Peppered moths and walking stick insects both have camouflage-related genes that stem from transposition events.
The Agouti genes are orthologous genes involved in camouflage across many lineages. They produce yellow and red coloration (phaeomelanin), and work in competition with other genes that produce black (melanin) and brown (eumelanin) colours. In eastern deer mice, over a period of about 8,000 years the single agouti gene developed nine mutations that each made expression of yellow fur stronger under natural selection, and largely eliminated melanin-coding black fur coloration. On the other hand, all-black domesticated cats have deletions of the agouti gene that prevent its expression, meaning no yellow or red colour is produced. The evolution, history and widespread scope of the agouti gene show that different organisms often rely on orthologous or even identical genes to develop a variety of camouflage strategies.
Ecology
While camouflage can increase an organism's fitness, it has genetic and energetic costs. There is a trade-off between detectability and mobility. Species camouflaged to fit a specific microhabitat are less likely to be detected when in that microhabitat, but must spend energy to reach, and sometimes to remain in, such areas. Outside the microhabitat, the organism has a higher chance of detection. Generalized camouflage allows species to avoid predation over a wide range of habitat backgrounds, but is less effective. The development of generalized or specialized camouflage strategies is highly dependent on the biotic and abiotic composition of the surrounding environment.
There are many examples of the tradeoffs between specific and general cryptic patterning. Phestilla melanocrachia, a species of nudibranch that feeds on stony coral, utilizes specific cryptic patterning in reef ecosystems. The nudibranch syphons pigments from the consumed coral into the epidermis, adopting the same shade as the consumed coral. This allows the nudibranch to change colour (mostly between black and orange) depending on the coral system that it inhabits. However, P. melanocrachia can only feed and lay eggs on the branches of host-coral, Platygyra carnosa, which limits the geographical range and efficacy in nudibranch nutritional crypsis. Furthermore, the nudibranch colour change is not immediate, and switching between coral hosts when in search for new food or shelter can be costly.
The costs associated with distractive or disruptive crypsis are more complex than the costs associated with background matching. Disruptive patterns distort the body outline, making it harder to precisely identify and locate. However, disruptive patterns can result in higher predation: patterns that involve visible symmetry (such as in some butterflies) reduce survivability and increase predation. Some researchers argue that because wing-shape and colour pattern are genetically linked, it is genetically costly to develop asymmetric wing colorations that would enhance the efficacy of disruptive cryptic patterning. Symmetry does not carry a high survival cost for butterflies and moths that their predators view from above on a homogeneous background, such as the bark of a tree. On the other hand, natural selection drives species with variable backgrounds and habitats to move symmetrical patterns away from the centre of the wing and body, disrupting their predators' symmetry recognition.
Principles
Camouflage can be achieved by different methods, described below. Most of the methods help to hide against a background; but mimesis and motion dazzle protect without hiding. Methods may be applied on their own or in combination. Many mechanisms are visual, but some research has explored the use of techniques against olfactory (scent) and acoustic (sound) detection. Methods may also apply to military equipment.
Background matching
Some animals' colours and patterns match a particular natural background. This is an important component of camouflage in all environments. For instance, tree-dwelling parakeets are mainly green; woodcocks of the forest floor are brown and speckled; reedbed bitterns are streaked brown and buff; in each case the animal's coloration matches the hues of its habitat. Similarly, desert animals are almost all desert coloured in tones of sand, buff, ochre, and brownish grey, whether they are mammals like the gerbil or fennec fox, birds such as the desert lark or sandgrouse, or reptiles like the skink or horned viper. Military uniforms, too, generally resemble their backgrounds; for example khaki uniforms are a muddy or dusty colour, originally chosen for service in South Asia. Many moths show industrial melanism, including the peppered moth which has coloration that blends in with tree bark. The coloration of these insects evolved between 1860 and 1940 to match the changing colour of the tree trunks on which they rest, from pale and mottled to almost black in polluted areas. This is taken by zoologists as evidence that camouflage is influenced by natural selection, as well as demonstrating that it changes where necessary to resemble the local background.
Disruptive coloration
Disruptive patterns use strongly contrasting, non-repeating markings such as spots or stripes to break up the outlines of an animal or military vehicle, or to conceal telltale features, especially by masking the eyes, as in the common frog. Disruptive patterns may use more than one method to defeat visual systems such as edge detection. Predators like the leopard use disruptive camouflage to help them approach prey, while potential prey use it to avoid detection by predators. Disruptive patterning is common in military usage, both for uniforms and for military vehicles. Disruptive patterning, however, does not always achieve crypsis on its own, as an animal or a military target may be given away by factors like shape, shine, and shadow.
The presence of bold skin markings does not in itself prove that an animal relies on camouflage, as that depends on its behaviour. For example, although giraffes have a high contrast pattern that could be disruptive coloration, the adults are very conspicuous when in the open. Some authors have argued that adult giraffes are cryptic, since when standing among trees and bushes they are hard to see at even a few metres' distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves, even from lions, rather than on camouflage. A different explanation is implied by young giraffes being far more vulnerable to predation than adults. More than half of all giraffe calves die within a year, and giraffe mothers hide their newly born calves, which spend much of the time lying down in cover while their mothers are away feeding. The mothers return once a day to feed their calves with milk. Since the presence of a mother nearby does not affect survival, it is argued that these juvenile giraffes must be very well camouflaged; this is supported by coat markings being strongly inherited.
The possibility of camouflage in plants was little studied until the late 20th century. Leaf variegation with white spots may serve as camouflage in forest understory plants, where there is a dappled background; leaf mottling is correlated with closed habitats. Disruptive camouflage would have a clear evolutionary advantage in plants: they would tend to escape from being eaten by herbivores. Another possibility is that some plants have leaves differently coloured on upper and lower surfaces or on parts such as veins and stalks to make green-camouflaged insects conspicuous, and thus benefit the plants by favouring the removal of herbivores by carnivores. These hypotheses are testable.
Eliminating shadow
Some animals, such as the horned lizards of North America, have evolved elaborate measures to eliminate shadow. Their bodies are flattened, with the sides thinning to an edge; the animals habitually press their bodies to the ground; and their sides are fringed with white scales which effectively hide and disrupt any remaining areas of shadow there may be under the edge of the body. The theory that the body shape of the horned lizards which live in open desert is adapted to minimise shadow is supported by the one species which lacks fringe scales, the roundtail horned lizard, which lives in rocky areas and resembles a rock. When this species is threatened, it makes itself look as much like a rock as possible by curving its back, emphasizing its three-dimensional shape. Some species of butterflies, such as the speckled wood, Pararge aegeria, minimise their shadows when perched by closing the wings over their backs, aligning their bodies with the sun, and tilting to one side towards the sun, so that the shadow becomes a thin inconspicuous line rather than a broad patch. Similarly, some ground-nesting birds, including the European nightjar, select a resting position facing the sun. Eliminating shadow was identified as a principle of military camouflage during the Second World War.
Distraction
Many prey animals have conspicuous high-contrast markings which paradoxically attract the predator's gaze. These distractive markings may serve as camouflage by distracting the predator's attention from recognising the prey as a whole, for example by keeping the predator from identifying the prey's outline. Experimentally, search times for blue tits increased when artificial prey had distractive markings.
Self-decoration
Some animals actively seek to hide by decorating themselves with materials such as twigs, sand, or pieces of shell from their environment, to break up their outlines, to conceal the features of their bodies, and to match their backgrounds. For example, a caddisfly larva builds a decorated case and lives almost entirely inside it; a decorator crab covers its back with seaweed, sponges, and stones. The nymph of the predatory masked bug uses its hind legs and a 'tarsal fan' to decorate its body with sand or dust. There are two layers of bristles (trichomes) over the body. On these, the nymph spreads an inner layer of fine particles and an outer layer of coarser particles. The camouflage may conceal the bug from both predators and prey.
Similar principles can be applied for military purposes, for instance when a sniper wears a ghillie suit designed to be further camouflaged by decoration with materials such as tufts of grass from the sniper's immediate environment. Such suits were used as early as 1916, the British army having adopted "coats of motley hue and stripes of paint" for snipers. Cott takes the example of the larva of the blotched emerald moth, which fixes a screen of fragments of leaves to its specially hooked bristles, to argue that military camouflage uses the same method, pointing out that the "device is ... essentially the same as one widely practised during the Great War for the concealment, not of caterpillars, but of caterpillar-tractors, [gun] battery positions, observation posts and so forth."
Cryptic behaviour
Movement catches the eye of prey animals on the lookout for predators, and of predators hunting for prey. Most methods of crypsis therefore also require suitable cryptic behaviour, such as lying down and keeping still to avoid being detected, or in the case of stalking predators such as the tiger, moving with extreme stealth, both slowly and quietly, watching its prey for any sign they are aware of its presence. As an example of the combination of behaviours and other methods of crypsis involved, young giraffes seek cover, lie down, and keep still, often for hours until their mothers return; their skin pattern blends with the pattern of the vegetation, while the chosen cover and lying position together hide the animals' shadows. The flat-tail horned lizard similarly relies on a combination of methods: it is adapted to lie flat in the open desert, relying on stillness, its cryptic coloration, and concealment of its shadow to avoid being noticed by predators. In the ocean, the leafy sea dragon sways mimetically, like the seaweeds amongst which it rests, as if rippled by wind or water currents. Swaying is seen also in some insects, like Macleay's spectre stick insect, Extatosoma tiaratum. The behaviour may be motion crypsis, preventing detection, or motion masquerade, promoting misclassification (as something other than prey), or a combination of the two.
Motion camouflage
Most forms of camouflage are ineffective when the camouflaged animal or object moves, because the motion is easily seen by the observing predator, prey or enemy. However, insects such as hoverflies and dragonflies use motion camouflage: the hoverflies to approach possible mates, and the dragonflies to approach rivals when defending territories. Motion camouflage is achieved by moving so as to stay on a straight line between the target and a fixed point in the landscape; the pursuer thus appears not to move, but only to loom larger in the target's field of vision.
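A minimal geometric sketch of the strategy just described: at each instant the shadower places itself on the straight line between a fixed reference point in the landscape and the moving target, so that the target sees it superimposed on that fixed point, apparently stationary and only growing larger. The function names, the target track, and the closing fractions below are all illustrative assumptions, not from the source:

```python
import numpy as np

def motion_camouflage_position(fixed_point, target, fraction):
    """Return the point a given fraction of the way from the fixed
    reference point to the target. Staying on this line means the
    target sees the pursuer in the same direction as the static
    landscape point, so the pursuer appears not to move."""
    fixed_point = np.asarray(fixed_point, dtype=float)
    target = np.asarray(target, dtype=float)
    return fixed_point + fraction * (target - fixed_point)

fixed = [0.0, 0.0]                      # fixed point in the landscape
for t, frac in enumerate([0.2, 0.4, 0.7, 0.9]):
    target = [10.0 + 2.0 * t, 5.0]      # target moving steadily to the right
    pursuer = motion_camouflage_position(fixed, target, frac)
    print(f"t={t}: pursuer at {pursuer.round(2)}")
```

As the fraction rises toward 1, the pursuer closes the distance while remaining on the fixed-point-to-target line, producing exactly the looming effect described above.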
Some insects sway while moving to appear to be blown back and forth by the breeze.
The same method can be used for military purposes, for example by missiles to minimise their risk of detection by an enemy. However, missile engineers, and animals such as bats, use the method mainly for its efficiency rather than camouflage.
Changeable skin coloration
Animals such as chameleons, frogs, flatfish such as the peacock flounder, squid, octopuses, and even the isopod Idotea balthica actively change their skin patterns and colours using special chromatophore cells to resemble their current background, or, as in most chameleons, for signalling. However, Smith's dwarf chameleon does use active colour change for camouflage.
Each chromatophore contains pigment of only one colour. In fish and frogs, colour change is mediated by a type of chromatophore known as a melanophore, which contains dark pigment. A melanophore is star-shaped; it contains many small pigmented organelles which can be dispersed throughout the cell, or aggregated near its centre. When the pigmented organelles are dispersed, the cell makes a patch of the animal's skin appear dark; when they are aggregated, most of the cell, and the animal's skin, appears light. In frogs, the change is controlled relatively slowly, mainly by hormones. In fish, the change is controlled by the brain, which sends signals directly to the chromatophores, as well as producing hormones.
The skins of cephalopods such as the octopus contain complex units, each consisting of a chromatophore with surrounding muscle and nerve cells. The cephalopod chromatophore has all its pigment grains in a small elastic sac, which can be stretched or allowed to relax under the control of the brain to vary its opacity. By controlling chromatophores of different colours, cephalopods can rapidly change their skin patterns and colours.
On a longer timescale, animals like the Arctic hare, Arctic fox, stoat, and rock ptarmigan have snow camouflage, changing their coat colour (by moulting and growing new fur or feathers) from brown or grey in the summer to white in the winter; the Arctic fox is the only species in the dog family to do so. However, Arctic hares which live in the far north of Canada, where summer is very short, remain white year-round.
The principle of varying coloration either rapidly or with the changing seasons has military applications. Active camouflage could in theory make use of both dynamic colour change and counterillumination. Simple methods such as changing uniforms and repainting vehicles for winter have been in use since World War II. In 2011, BAE Systems announced their Adaptiv infrared camouflage technology. It uses about 1,000 hexagonal panels to cover the sides of a tank. The Peltier plate panels are heated and cooled to match either the vehicle's surroundings (crypsis), or an object such as a car (mimesis), when viewed in infrared.
Countershading
Countershading uses graded colour to counteract the effect of self-shadowing, creating an illusion of flatness. Self-shadowing makes an animal appear darker below than on top, grading from light to dark; countershading 'paints in' tones which are darkest on top, lightest below, making the countershaded animal nearly invisible against a suitable background. Thayer observed that "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and vice versa". Accordingly, the principle of countershading is sometimes called Thayer's Law. Countershading is widely used by terrestrial animals, such as gazelles and grasshoppers; marine animals, such as sharks and dolphins; and birds, such as snipe and dunlin.
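Numerically, the principle amounts to cancelling one gradient with another. In this rough sketch (the illumination values are invented, not measured), a pigment gradient inversely proportional to the light falling on each part of the flank yields uniform reflected brightness, removing the shading cue that reveals solid form:

```python
# Illustrative numbers only: illumination is strongest on the back and
# weakest on the belly; choosing reflectance inversely proportional to
# illumination makes the reflected brightness constant (no 3-D cue).
illumination = [1.0, 0.8, 0.6, 0.4, 0.2]    # top of flank -> belly
target_brightness = 0.2                      # desired uniform result
reflectance = [target_brightness / light for light in illumination]

for light, paint in zip(illumination, reflectance):
    print(f"light {light:.1f} x reflectance {paint:.2f} "
          f"= {light * paint:.2f}")          # constant product
```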
Countershading is less often used for military camouflage, despite Second World War experiments that showed its effectiveness. English zoologist Hugh Cott encouraged the use of methods including countershading, but despite his authority on the subject, failed to persuade the British authorities. Soldiers often wrongly viewed camouflage netting as a kind of invisibility cloak, and they had to be taught to look at camouflage practically, from an enemy observer's viewpoint. At the same time in Australia, zoologist William John Dakin advised soldiers to copy animals' methods, using their instincts for wartime camouflage.
The term countershading has a second meaning unrelated to "Thayer's Law". It is that the upper and undersides of animals such as sharks, and of some military aircraft, are different colours to match the different backgrounds when seen from above or from below. Here the camouflage consists of two surfaces, each with the simple function of providing concealment against a specific background, such as a bright water surface or the sky. The body of a shark or the fuselage of an aircraft is not gradated from light to dark to appear flat when seen from the side. The camouflage methods used are the matching of background colour and pattern, and disruption of outlines.
Counter-illumination
Counter-illumination means producing light to match a background that is brighter than an animal's body or military vehicle; it is a form of active camouflage. It is notably used by some species of squid, such as the firefly squid and the midwater squid. The latter has light-producing organs (photophores) scattered all over its underside; these create a sparkling glow that prevents the animal from appearing as a dark shape when seen from below. Counterillumination camouflage is the likely function of the bioluminescence of many marine organisms, though light is also produced to attract or to detect prey and for signalling.
Counterillumination has rarely been used for military purposes. "Diffused lighting camouflage" was trialled by Canada's National Research Council during the Second World War. It involved projecting light on to the sides of ships to match the faint glow of the night sky, requiring awkward external platforms to support the lamps. The Canadian concept was refined in the American Yehudi lights project, and trialled in aircraft including B-24 Liberators and naval Avengers. The planes were fitted with forward-pointing lamps automatically adjusted to match the brightness of the night sky. This enabled them to approach much closer to a target – within – before being seen. Counterillumination was made obsolete by radar, and neither diffused lighting camouflage nor Yehudi lights entered active service.
Transparency
Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at a depth of ; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters.
Some tissues such as muscles can be made transparent, provided either they are very thin or organised as regular layers or fibrils that are small compared to the wavelength of visible light. A familiar example is the transparency of the lens of the vertebrate eye, which is made of the protein crystallin, and the vertebrate cornea which is made of the protein collagen. Other structures cannot be made transparent, notably the retinas or equivalent light-absorbing structures of eyes – they must absorb light to be able to function. The camera-type eye of vertebrates and cephalopods must be completely opaque. Finally, some structures are visible for a reason, such as to lure prey. For example, the nematocysts (stinging cells) of the transparent siphonophore Agalma okenii resemble small copepods. Examples of transparent marine animals include a wide variety of larvae, including radiata (coelenterates), siphonophores, salps (floating tunicates), gastropod molluscs, polychaete worms, many shrimplike crustaceans, and fish; whereas the adults of most of these are opaque and pigmented, resembling the seabed or shores where they live. Adult comb jellies and jellyfish obey the rule, often being mainly transparent. Cott suggests this follows the more general rule that animals resemble their background: in a transparent medium like seawater, that means being transparent. The small Amazon River fish Microphilypnus amazonicus and the shrimps it associates with, Pseudopalaemon gouldingi, are so transparent as to be "almost invisible"; further, these species appear to select whether to be transparent or more conventionally mottled (disruptively patterned) according to the local background in the environment.
Silvering
Where transparency cannot be achieved, it can be imitated effectively by silvering to make an animal's body highly reflective. At medium depths at sea, light comes from above, so a mirror oriented vertically makes animals such as fish invisible from the side. Most fish in the upper ocean such as sardine and herring are camouflaged by silvering.
The marine hatchetfish is extremely flattened laterally, leaving the body just millimetres thick, and the body is so silvery as to resemble aluminium foil. The mirrors consist of microscopic structures similar to those used to provide structural coloration: stacks of between 5 and 10 crystals of guanine spaced about a quarter of a wavelength apart to interfere constructively and achieve nearly 100 per cent reflection. In the deep waters where the hatchetfish lives, only blue light with a wavelength of 500 nanometres percolates down and needs to be reflected, so mirrors spaced 125 nanometres apart provide good camouflage.
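The spacing figure follows from simple quarter-wavelength arithmetic, sketched below. (This ignores the refractive index of the stack, which a full optical treatment would include; the sketch just reproduces the article's own figures.)

```python
# Constructive interference from a stack spaced at about a quarter of
# the wavelength to be reflected, per the description above.
wavelength_nm = 500               # blue light at hatchetfish depths
spacing_nm = wavelength_nm / 4
print(spacing_nm)                 # 125.0 nm, matching the stated spacing
```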
In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multilayer mirrors made of protein rather than guanine.
Ultra-blackness
Some deep-sea fishes have very black skin, reflecting under 0.5% of ambient light. This can prevent detection by predators or prey fish which use bioluminescence for illumination. One anglerfish, in the genus Oneirodes, had particularly black skin, reflecting only 0.044% of light at a wavelength of 480 nm. The ultra-blackness is achieved with a thin but continuous layer of particles, melanosomes, in the dermis. These particles absorb most of the incident light, and are sized and shaped so as to scatter, rather than reflect, most of the remainder. Modelling suggests that this camouflage should reduce the distance at which such a fish can be seen by a factor of 6 compared to a fish with a nominal 2% reflectance. Species with this adaptation are widely dispersed among the orders of the phylogenetic tree of bony fishes (Actinopterygii), implying that natural selection has driven the convergent evolution of ultra-blackness camouflage independently many times.
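The six-fold figure is consistent with a back-of-envelope model (this reasoning is supplied for illustration and is not taken from the cited modelling study): if the reflected light an observer receives falls off with the square of distance, sighting distance scales with the square root of skin reflectance. Taking 0.05% as a representative ultra-black reflectance:

```python
from math import sqrt

# Received light ~ reflectance / distance**2, so the distance at which
# a fish is just detectable scales with sqrt(reflectance).
nominal_reflectance = 0.02        # 2% reference fish
ultra_black = 0.0005              # assumed representative ultra-black skin
print(sqrt(nominal_reflectance / ultra_black))   # ~6.3-fold reduction
```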
Mimesis
In mimesis (also called masquerade), the camouflaged object looks like something else which is of no special interest to the observer. Mimesis is common in prey animals, for example when a peppered moth caterpillar mimics a twig, or a grasshopper mimics a dry leaf. It is also found in nest structures; some eusocial wasps, such as Leipomeles dorsata, build a nest envelope in patterns that mimic the leaves surrounding the nest.
Mimesis is also employed by some predators and parasites to lure their prey. For example, a flower mantis mimics a particular kind of flower, such as an orchid. This tactic has occasionally been used in warfare, for example with heavily armed Q-ships disguised as merchant ships.
The common cuckoo, a brood parasite, provides examples of mimesis both in the adult and in the egg. The female lays her eggs in nests of other, smaller species of bird, one per nest. The female mimics a sparrowhawk. The resemblance is sufficient to make small birds take action to avoid the apparent predator. The female cuckoo then has time to lay her egg in their nest without being seen to do so. The cuckoo's egg itself mimics the eggs of the host species, reducing its chance of being rejected.
Motion dazzle
Most forms of camouflage are made ineffective by movement: a deer or grasshopper may be highly cryptic when motionless, but instantly seen when it moves. But one method, motion dazzle, requires rapidly moving bold patterns of contrasting stripes. Motion dazzle may degrade predators' ability to estimate the prey's speed and direction accurately, giving the prey an improved chance of escape. Motion dazzle distorts speed perception and is most effective at high speeds; stripes can also distort perception of size (and so, perceived range to the target). As of 2011, motion dazzle had been proposed for military vehicles, but never applied. Since motion dazzle patterns would make animals more difficult to locate accurately when moving, but easier to see when stationary, there would be an evolutionary trade-off between motion dazzle and crypsis.
An animal that is commonly thought to be dazzle-patterned is the zebra. The bold stripes of the zebra have been claimed to be disruptive camouflage, background-blending and countershading. After many years in which the purpose of the coloration was disputed, an experimental study by Tim Caro suggested in 2012 that the pattern reduces the attractiveness of stationary models to biting flies such as horseflies and tsetse flies. However, a simulation study by Martin How and Johannes Zanker in 2014 suggests that when moving, the stripes may confuse observers, such as mammalian predators and biting insects, by two visual illusions: the wagon-wheel effect, where the perceived motion is inverted, and the barberpole illusion, where the perceived motion is in a wrong direction.
Applications
Military
Before 1800
Ship camouflage was occasionally used in ancient times. Philostratus wrote in his Imagines that Mediterranean pirate ships could be painted blue-gray for concealment. Vegetius says that "Venetian blue" (sea green) was used in the Gallic Wars, when Julius Caesar sent his speculatoria navigia (reconnaissance boats) to gather intelligence along the coast of Britain; the ships were painted entirely in bluish-green wax, with sails, ropes and crew the same colour. There is little evidence of military use of camouflage on land before 1800, but two unusual ceramics show men in Peru's Mochica culture from before 500 AD, hunting birds with blowpipes which are fitted with a kind of shield near the mouth, perhaps to conceal the hunters' hands and faces. Another early source is a 15th-century French manuscript, The Hunting Book of Gaston Phebus, showing a horse pulling a cart which contains a hunter armed with a crossbow under a cover of branches, perhaps serving as a hide for shooting game. Jamaican Maroons are said to have used plant materials as camouflage in the First Maroon War.
19th-century origins
The development of military camouflage was driven by the increasing range and accuracy of infantry firearms in the 19th century. In particular, the replacement of the inaccurate musket with weapons such as the Baker rifle made personal concealment in battle essential. Two Napoleonic War skirmishing units of the British Army, the 95th Rifle Regiment and the 60th Rifle Regiment, were the first to adopt camouflage in the form of a rifle green jacket, while the Line regiments continued to wear scarlet tunics. A study conducted in 1800 by the English artist and soldier Charles Hamilton Smith provided evidence that grey uniforms were less visible than green ones at a range of 150 yards.
In the American Civil War, rifle units such as the 1st United States Sharp Shooters (in the Federal army) similarly wore green jackets while other units wore more conspicuous colours. The first British Army unit to adopt khaki uniforms was the Corps of Guides at Peshawar, when Sir Harry Lumsden and his second in command, William Hodson introduced a "drab" uniform in 1848. Hodson wrote that it would be more appropriate for the hot climate, and help make his troops "invisible in a land of dust". Later they improvised by dyeing cloth locally. Other regiments in India soon adopted the khaki uniform, and by 1896 khaki drill uniform was used everywhere outside Europe; by the Second Boer War six years later it was used throughout the British Army.
During the late 19th century camouflage was applied to British coastal fortifications. The fortifications around Plymouth, England were painted in the late 1880s in "irregular patches of red, brown, yellow and green." From 1891 onwards British coastal artillery was permitted to be painted in suitable colours "to harmonise with the surroundings" and by 1904 it was standard practice that artillery and mountings should be painted with "large irregular patches of different colours selected to suit local conditions."
First World War
In the First World War, the French army formed a camouflage corps, led by Lucien-Victor Guirand de Scévola, employing artists known as camoufleurs to create schemes such as tree observation posts and covers for guns. Other armies soon followed them. The term camouflage probably comes from camoufler, a Parisian slang term meaning to disguise, and may have been influenced by camouflet, a French term meaning smoke blown in someone's face. The English zoologist John Graham Kerr, artist Solomon J. Solomon and the American artist Abbott Thayer led attempts to introduce scientific principles of countershading and disruptive patterning into military camouflage, with limited success. In early 1916 the Royal Naval Air Service began to create dummy airfields to draw the attention of enemy planes to empty land. They created decoy homes and lined fake runways with flares, which were meant to help protect real towns from night raids. This strategy was not common practice and did not succeed at first, but in 1918 it caught the Germans off guard multiple times.
Ship camouflage was introduced in the early 20th century as the range of naval guns increased, with ships painted grey all over. In April 1917, when German U-boats were sinking many British ships with torpedoes, the marine artist Norman Wilkinson devised dazzle camouflage, which paradoxically made ships more visible but harder to target. In Wilkinson's own words, dazzle was designed "not for low visibility, but in such a way as to break up her form and thus confuse a submarine officer as to the course on which she was heading".
Second World War
In the Second World War, the zoologist Hugh Cott, a protégé of Kerr, worked to persuade the British army to use more effective camouflage methods, including countershading, but, like Kerr and Thayer in the First World War, with limited success. For example, he painted two rail-mounted coastal guns, one in conventional style, one countershaded. In aerial photographs, the countershaded gun was essentially invisible. The power of aerial observation and attack led every warring nation to camouflage targets of all types. The Soviet Union's Red Army created the comprehensive doctrine of Maskirovka for military deception, including the use of camouflage. For example, during the Battle of Kursk, General Katukov, the commander of the Soviet 1st Tank Army, remarked that the enemy "did not suspect that our well-camouflaged tanks were waiting for him. As we later learned from prisoners, we had managed to move our tanks forward unnoticed". The tanks were concealed in previously prepared defensive emplacements, with only their turrets above ground level. In the air, Second World War fighters were often painted in ground colours above and sky colours below, attempting two different camouflage schemes for observers above and below. Bombers and night fighters were often black, while maritime reconnaissance planes were usually white, to avoid appearing as dark shapes against the sky. For ships, dazzle camouflage was mainly replaced with plain grey in the Second World War, though experimentation with colour schemes continued.
As in the First World War, artists were pressed into service; for example, the surrealist painter Roland Penrose became a lecturer at the newly founded Camouflage Development and Training Centre at Farnham Castle, writing the practical Home Guard Manual of Camouflage. The film-maker Geoffrey Barkas ran the Middle East Command Camouflage Directorate during the 1941–1942 war in the Western Desert, including the successful deception of Operation Bertram. Hugh Cott was chief instructor; the artist camouflage officers, who called themselves camoufleurs, included Steven Sykes and Tony Ayrton. In Australia, artists were also prominent in the Sydney Camouflage Group, formed under the chairmanship of Professor William John Dakin, a zoologist from Sydney University. Max Dupain, Sydney Ure Smith, and William Dobell were among the members of the group, which worked at Bankstown Airport, RAAF Base Richmond and Garden Island Dockyard. In the United States, artists like John Vassos took a certificate course in military and industrial camouflage at the American School of Design with Baron Nicholas Cerkasoff, and went on to create camouflage for the Air Force.
After 1945
Camouflage has been used to protect military equipment such as vehicles, guns, ships, aircraft and buildings as well as individual soldiers and their positions.
Vehicle camouflage methods begin with paint, which offers at best only limited effectiveness. Other methods for stationary land vehicles include covering with improvised materials such as blankets and vegetation, and erecting nets, screens and soft covers which may suitably reflect, scatter or absorb near infrared and radar waves. Some military textiles and vehicle camouflage paints also reflect infrared to help provide concealment from night vision devices.
After the Second World War, radar made camouflage generally less effective, though coastal boats are sometimes painted like land vehicles. Aircraft camouflage too came to be seen as less important because of radar, and aircraft of different air forces, such as the Royal Air Force's Lightning, were often uncamouflaged.
Many camouflaged textile patterns have been developed to suit the need to match combat clothing to different kinds of terrain (such as woodland, snow, and desert). The design of a pattern effective in all terrains has proved elusive. The American Universal Camouflage Pattern of 2004 attempted to suit all environments, but was withdrawn after a few years of service. Terrain-specific patterns have sometimes been developed but are ineffective in other terrains. The problem of making a pattern that works at different ranges has been solved with multiscale designs, often with a pixellated appearance and designed digitally, that provide a fractal-like range of patch sizes so they appear disruptively coloured both at close range and at a distance. The first genuinely digital camouflage pattern was the Canadian Disruptive Pattern (CADPAT), issued to the army in 2002, soon followed by the American Marine pattern (MARPAT). A pixellated appearance is not essential for this effect, though it is simpler to design and to print.
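The multiscale idea can be sketched in a few lines (a toy illustration only, with no relation to the actual CADPAT or MARPAT design processes): summing random square patches at several scales and thresholding the result gives a two-tone mask with disruptive patch structure at every one of those scales.

```python
import numpy as np

def multiscale_pattern(size=256, scales=(4, 8, 16, 32), seed=0):
    """Toy 'digital camouflage' mask: random values on coarse grids
    of several resolutions are upsampled, summed and thresholded,
    so patches of many sizes appear in the final two-tone pattern."""
    rng = np.random.default_rng(seed)
    field = np.zeros((size, size))
    for n in scales:
        coarse = rng.random((n, n))                  # coarse random grid
        block = size // n
        field += np.kron(coarse, np.ones((block, block)))  # upsample
    return field > np.median(field)                  # two-tone mask

mask = multiscale_pattern()
print(mask.shape, round(mask.mean(), 2))             # (256, 256) 0.5
```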
Hunting
Hunters of game have long made use of camouflage in the form of materials such as animal skins, mud, foliage, and green or brown clothing to enable them to approach wary game animals. Field sports such as driven grouse shooting conceal hunters in hides (also called blinds or shooting butts). Modern hunting clothing makes use of fabrics that provide a disruptive camouflage pattern; for example, in 1986 the hunter Bill Jordan created cryptic clothing for hunters, printed with images of specific kinds of vegetation such as grass and branches.
Civil structures
Camouflage is occasionally used to make built structures less conspicuous: for example, in South Africa, towers carrying cell telephone antennae are sometimes camouflaged as tall trees with plastic branches, in response to "resistance from the community". Since this method is costly (a figure of three times the normal cost is mentioned), alternative forms of camouflage can include using neutral colours or familiar shapes such as cylinders and flagpoles. Conspicuousness can also be reduced by siting masts near, or on, other structures.
Automotive manufacturers often use patterns to disguise upcoming products. This camouflage is designed to obfuscate the vehicle's visual lines, and is used along with padding, covers, and decals. The patterns' purpose is to prevent visual observation (and, to a lesser degree, photography) that would otherwise enable the vehicle's design to be reproduced.
Fashion, art and society
Military camouflage patterns influenced fashion and art from the time of the First World War onwards. Gertrude Stein recalled the cubist artist Pablo Picasso's reaction in around 1915:
In 1919, the attendants of a "dazzle ball", hosted by the Chelsea Arts Club, wore dazzle-patterned black and white clothing. The ball influenced fashion and art via postcards and magazine articles. The Illustrated London News announced:
More recently, fashion designers have often used camouflage fabric for its striking designs, its "patterned disorder" and its symbolism. Camouflage clothing can be worn largely for its symbolic significance rather than for fashion, as when, during the late 1960s and early 1970s in the United States, anti-war protestors often ironically wore military clothing during demonstrations against the American involvement in the Vietnam War.
Modern artists such as Ian Hamilton Finlay have used camouflage to reflect on war. His 1973 screenprint of a tank camouflaged in a leaf pattern, Arcadia, is described by the Tate as drawing "an ironic parallel between this idea of a natural paradise and the camouflage patterns on a tank". The title refers to the Utopian Arcadia of poetry and art, and the memento mori Latin phrase Et in Arcadia ego which recurs in Hamilton Finlay's work. In science fiction, Camouflage is a novel about shapeshifting alien beings by Joe Haldeman. The word is used more figuratively in works of literature such as Thaisa Frank's collection of stories of love and loss, A Brief History of Camouflage.
In 1986, Andy Warhol began a series of monumental camouflage paintings, which helped to transform camouflage into a popular print pattern. A year later, in 1987, New York designer Stephen Sprouse used Warhol's camouflage prints as the basis for his Autumn Winter 1987 collection.
Notes
References
Bibliography
Camouflage in nature
Early research
Reprinted 1985, Penguin Classics.
General reading
Elias, Ann (2015). Camouflage Cultures: Beyond the Art of Disappearance. Sydney University Press.
Military camouflage
Further reading
Behrens, Roy R. (2002). False Colors: Art, Design and Modern Camouflage. Bobolink Books.
Behrens, Roy R. (2009). Camoupedia: A Compendium of Research on Art, Architecture and Camouflage. Bobolink Books.
Behrens, Roy R., ed. (2012). Ship Shape: A Dazzle Camouflage Sourcebook. Bobolink Books.
Goodden, Henrietta (2009). Camouflage and Art: Design for Deception in World War 2. Unicorn Press.
Latimer, Jon (2001). Deception in War. John Murray.
Newman, Alex; Blechman, Hardy (2004). DPM – Disruptive Pattern Material: An Encyclopaedia of Camouflage: Nature, Military and Culture. DPM.
Shell, Hanna Rose (2012). Hide and Seek: Camouflage, Photography and the Media of Reconnaissance. Zone Books.
Stevens, Martin; Merilaita, Sami (2011). Animal Camouflage: Mechanisms and Function. Cambridge University Press.
Wickler, Wolfgang (1968). Mimicry in Plants and Animals. McGraw-Hill.
For children
Kalman, Bobbie; Crossingham, John (2001). What are Camouflage and Mimicry?. Crabtree Publishing. (ages 4–8)
Mettler, Rene (2001). Animal Camouflage. First Discovery series. Moonlight Publishing. (ages 4–8)
External links
Ohio State University: The Camouflage Project – interplay of science and art
Behrens, Roy. A Chronology of Camouflage
MACV-SOG Improvised Camo Camouflage Effectiveness on YouTube
Survival skills
Deception
Hunting
Antipredator adaptations
Biological interactions
Evolutionary ecology | Camouflage | ["Biology"] | 10,255 | ["Camouflage", "Behavior", "Biological interactions", "Biological defense mechanisms", "Antipredator adaptations", "nan", "Ethology"] |
6,449 | https://en.wikipedia.org/wiki/Clock | A clock or chronometer is a device that measures and displays time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units such as the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia.
Some predecessors to the modern clock may be considered "clocks" that are based on movement in nature: A sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, a well-known example being the hourglass. Water clocks, along with sundials, are possibly the oldest time-measuring instruments. A major advance occurred with the invention of the verge escapement, which made possible the first mechanical clocks around 1300 in Europe, which kept time with oscillating timekeepers like balance wheels.
Traditionally, in horology (the study of timekeeping), the term clock was used for a striking clock, while a clock that did not strike the hours audibly was called a timepiece. This distinction is not generally made any longer. Watches and other timepieces that can be carried on one's person are usually not referred to as clocks. Spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished. The next development in accuracy occurred after 1656 with the invention of the pendulum clock by Christiaan Huygens. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The mechanism of a timepiece with a series of gears driven by a spring or weights is referred to as clockwork; the term is used by extension for a similar mechanism not used in a timepiece. The electric clock was patented in 1840, and electronic clocks were introduced in the 20th century, becoming widespread with the development of small battery-powered semiconductor devices.
The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates at a particular frequency.
This object can be a pendulum, a balance wheel, a tuning fork, a quartz crystal, or the vibration of electrons in atoms as they emit microwaves, the last of which is so precise that it serves as the definition of the second.
Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face and moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 12-hour time notation and 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch.
Etymology
The word clock derives from the medieval Latin word for 'bell' and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from Middle Low German and Middle Dutch.
The word is also derived from terms in Middle English, Old North French, and Middle Dutch, all of which mean 'bell'.
History of time-measuring devices
Sundials
The apparent position of the Sun in the sky changes over the course of each day, reflecting the rotation of the Earth. Shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. A sundial shows the time by displaying the position of a shadow on a (usually) flat surface that has markings that correspond to the hours. Sundials can be horizontal, vertical, or in other orientations. Sundials were widely used in ancient times. With knowledge of latitude, a well-constructed sundial can measure local solar time with reasonable accuracy, within a minute or two. Sundials continued to be used to monitor the performance of clocks until the 1830s, when the use of the telegraph and trains standardized time and time zones between cities.
Devices that measure duration, elapsed time and intervals
Many devices can be used to mark the passage of time without respect to reference time (time of day, hours, minutes, etc.) and can be useful for measuring duration or intervals. Examples of such duration timers are candle clocks, incense clocks, and the hourglass. Both the candle clock and the incense clock work on the same principle, wherein the consumption of resources is more or less constant, allowing reasonably precise and repeatable estimates of time passages. In the hourglass, fine sand pouring through a tiny hole at a constant rate indicates an arbitrary, predetermined passage of time. The resource is not consumed, but re-used.
Water clocks
Water clocks, along with sundials, are possibly the oldest time-measuring instruments, with the only exception being the day-counting tally stick. Given their great antiquity, where and when they first existed is not known and is perhaps unknowable. The bowl-shaped outflow is the simplest form of a water clock and is known to have existed in Babylon and Egypt around the 16th century BC. Other regions of the world, including India and China, also have early evidence of water clocks, but the earliest dates are less certain. Some authors, however, write about water clocks appearing as early as 4000 BC in these regions of the world.
The Macedonian astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC, which housed a large clepsydra inside as well as multiple prominent sundials outside, allowing it to function as a kind of early clocktower. The Greek and Roman civilizations advanced water clock design with improved accuracy. These advances were passed on through Byzantine and Islamic times, eventually making their way back to Europe. Independently, the Chinese developed their own advanced water clocks () by 725 AD, passing their ideas on to Korea and Japan.
Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. Pre-modern societies did not have the same precise timekeeping requirements that exist in modern industrial societies, where every hour of work or rest is monitored and work may start or finish at any time regardless of external conditions. Instead, water clocks in ancient societies were used mainly for astrological reasons. These early water clocks were calibrated with a sundial. While never reaching the level of accuracy of a modern timepiece, the water clock was the most accurate and commonly used timekeeping device for millennia until it was replaced by the more accurate pendulum clock in 17th-century Europe.
Islamic civilization is credited with further advancing the accuracy of clocks through elaborate engineering. In 797 (or possibly 801), the Abbasid caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas together with a "particularly elaborate example" of a water clock. Pope Sylvester II introduced clocks to northern and western Europe around 1000 AD.
Mechanical water clocks
The first known geared clock was invented by the great mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes created his astronomical clock, which was also a cuckoo clock with birds singing and moving every hour; it is considered the first carillon clock, as it played music at the same time as a figure blinked its eyes, surprised by the singing birds. The Archimedes clock worked with a system of four weights, counterweights, and strings regulated by a system of floats in a water container with siphons that regulated the automatic continuation of the clock. The principles of this type of clock are described by the mathematician and physicist Hero, who says that some of them work with a chain that turns a gear in the mechanism. Another Greek clock, probably constructed at the time of Alexander, was in Gaza, as described by Procopius. The Gaza clock was probably a Meteoroskopeion, i.e., a building showing celestial phenomena and the time. It had a pointer for the time and some automata similar to those of the Archimedes clock: there were 12 doors, one opening every hour, with Hercules performing his labours, the Lion at one o'clock, etc., and at night a lamp became visible every hour through 12 windows that opened to show the time.
The Tang dynasty Buddhist monk Yi Xing, along with the government official Liang Lingzan, made the escapement in 723 (or 725) for the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. The Song dynasty polymath Su Song (1020–1101) incorporated it into his monumental astronomical clock tower of Kaifeng in 1088. His astronomical clock and rotating armillary sphere still relied on the use of either flowing water during the spring, summer, and autumn seasons or liquid mercury during the freezing temperatures of winter (i.e., hydraulics).
In Su Song's waterwheel linkwork device, the action of the escapement's arrest and release was achieved by gravity exerted periodically as the continuous flow of liquid-filled containers of a limited size. In a single line of evolution, Su Song's clock therefore united the concepts of the clepsydra and the mechanical clock into one device run by mechanics and hydraulics. In his memorial, Su Song wrote about this concept:
According to your servant's opinion there have been many systems and designs for astronomical instruments during past dynasties all differing from one another in minor respects. But the principle of the use of water-power for the driving mechanism has always been the same. The heavens move without ceasing but so also does water flow (and fall). Thus if the water is made to pour with perfect evenness, then the comparison of the rotary movements (of the heavens and the machine) will show no discrepancy or contradiction; for the unresting follows the unceasing.
Su Song was also strongly influenced by the earlier armillary sphere created by Zhang Sixun (976 AD), who also employed the escapement mechanism and used liquid mercury instead of water in the waterwheel of his astronomical clock tower. The mechanical clockworks for Su Song's astronomical tower featured a great driving-wheel that was 11 feet in diameter, carrying 36 scoops, into each of which water was poured at a uniform rate from the "constant-level tank". The main driving shaft of iron, with its cylindrical necks supported on iron crescent-shaped bearings, ended in a pinion, which engaged a gear wheel at the lower end of the main vertical transmission shaft. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet), featured a clock escapement, and was indirectly powered by a rotating wheel either with falling water or liquid mercury. A full-sized working replica of Su Song's clock exists in the Republic of China (Taiwan)'s National Museum of Natural Science, Taichung city. This full-scale, fully functional replica, approximately 12 meters (39 feet) in height, was constructed from Su Song's original descriptions and mechanical drawings. The Chinese escapement spread west and was the source for Western escapement technology.
In the 12th century, Al-Jazari, an engineer from Mesopotamia (lived 1136–1206) who worked for the Artuqid king of Diyar-Bakr, Nasir al-Din, made numerous clocks of all shapes and sizes. The most reputed clocks included the elephant, scribe, and castle clocks, some of which have been successfully reconstructed. As well as telling the time, these grand clocks were symbols of the status, grandeur, and wealth of the Artuqid state. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts.
Fully mechanical
The word horologe (from the Greek words for 'hour' and 'to tell') was used to describe early mechanical clocks, but the use of this word (still current in several Romance languages) for all timekeepers conceals the true nature of the mechanisms. For example, there is a record that in 1176, Sens Cathedral in France installed an 'horologe', but the mechanism used is unknown. According to Jocelyn de Brakelond, in 1198, during a fire at the abbey of St Edmundsbury (now Bury St Edmunds), the monks "ran to the clock" to fetch water, indicating that their water clock had a reservoir large enough to help extinguish the occasional fire. The word clock, which came via Medieval Latin and Old Irish words meaning 'bell' and gradually superseded "horologe", suggests that it was the sound of bells that also characterized the prototype mechanical clocks that appeared during the 13th century in Europe.
In Europe, between 1280 and 1320, there was an increase in the number of references to clocks and horologes in church records, and this probably indicates that a new type of clock mechanism had been devised. Existing clock mechanisms that used water power were being adapted to take their driving power from falling weights. This power was controlled by some form of oscillating mechanism, probably derived from existing bell-ringing or alarm devices. This controlled release of power – the escapement – marks the beginning of the true mechanical clock, which differed from the previously mentioned cogwheel clocks. The verge escapement mechanism appeared during the surge of true mechanical clock development, which did not need any kind of fluid power, like water or mercury, to work.
These mechanical clocks were intended for two main purposes: for signalling and notification (e.g., the timing of services and public events) and for modeling the solar system. The former purpose is administrative; the latter arises naturally given the scholarly interests in astronomy, science, and astrology and how these subjects integrated with the religious philosophy of the time. The astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system.
Simple clocks intended mainly for notification were installed in towers and did not always require faces or hands. They would have announced the canonical hours or intervals between set times of prayer. Canonical hours varied in length as the times of sunrise and sunset shifted. The more sophisticated astronomical clocks would have had moving dials or hands and would have shown the time in various time systems, including Italian hours, canonical hours, and time as measured by astronomers at the time. Both styles of clocks started acquiring extravagant features, such as automata.
In 1283, a large clock was installed at Dunstable Priory in Bedfordshire in southern England; its location above the rood screen suggests that it was not a water clock. In 1292, Canterbury Cathedral installed a 'great horloge'. Over the next 30 years, there were mentions of clocks at a number of ecclesiastical institutions in England, Italy, and France. In 1322, a new clock was installed in Norwich, an expensive replacement for an earlier clock installed in 1273. This had a large (2 metre) astronomical dial with automata and bells. The costs of the installation included the full-time employment of two clockkeepers for two years.
Astronomical
An elaborate water clock, the 'Cosmic Engine', was invented by Su Song, a Chinese polymath, designed and constructed in China in 1092. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet) and was indirectly powered by a rotating wheel with falling water and liquid mercury, which turned an armillary sphere capable of calculating complex astronomical problems.
In Europe, there were the clocks constructed by Richard of Wallingford in St Albans by 1336, and by Giovanni de Dondi in Padua from 1348 to 1364. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made. They illustrate how quickly the theory of the mechanical clock had been translated into practical constructions, and also that one of the many impulses to their development had been the desire of astronomers to investigate celestial phenomena.
The Astrarium of Giovanni Dondi dell'Orologio was a complex astronomical clock built between 1348 and 1364 in Padua, Italy, by the doctor and clock-maker Giovanni Dondi dell'Orologio. The Astrarium had seven faces and 107 moving gears; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The astrarium stood about 1 metre high, and consisted of a seven-sided brass or iron framework resting on 7 decorative paw-shaped feet. The lower section provided a 24-hour dial and a large calendar drum, showing the fixed feasts of the church, the movable feasts, and the position in the zodiac of the moon's ascending node. The upper section contained 7 dials, each about 30 cm in diameter, showing the positional data for the Primum Mobile, Venus, Mercury, the moon, Saturn, Jupiter, and Mars. Directly above the 24-hour dial is the dial of the Primum Mobile, so called because it reproduces the diurnal motion of the stars and the annual motion of the sun against the background of stars. Each of the 'planetary' dials used complex clockwork to produce reasonably accurate models of the planets' motion. These agreed reasonably well both with Ptolemaic theory and with observations.
Wallingford's clock had a large astrolabe-type dial, showing the sun, the moon's age, phase, and node, a star map, and possibly the planets. In addition, it had a wheel of fortune and an indicator of the state of the tide at London Bridge. Bells rang every hour, the number of strokes indicating the time. Dondi's clock was a seven-sided construction, 1 metre high, with dials showing the time of day, including minutes, the motions of all the known planets, an automatic calendar of fixed and movable feasts, and an eclipse prediction hand rotating once every 18 years. It is not known how accurate or reliable these clocks would have been. They were probably adjusted manually every day to compensate for errors caused by wear and imprecise manufacture. Water clocks are sometimes still used today, and can be examined in places such as ancient castles and museums. The Salisbury Cathedral clock, built in 1386, is considered to be the world's oldest surviving mechanical clock that strikes the hours.
Spring-driven
Clockmakers developed their art in various ways. Building smaller clocks was a technical challenge, as was improving accuracy and reliability. Clocks could be impressive showpieces to demonstrate skilled craftsmanship, or less expensive, mass-produced items for domestic use. The escapement in particular was an important factor affecting the clock's accuracy, so many different mechanisms were tried.
Spring-driven clocks appeared during the 15th century, although they are often erroneously credited to Nuremberg watchmaker Peter Henlein (or Henle, or Hele) around 1511. The earliest existing spring-driven clock is the chamber clock given to Philip the Good, Duke of Burgundy, around 1430, now in the Germanisches Nationalmuseum. Spring power presented clockmakers with a new problem: how to keep the clock movement running at a constant rate as the spring ran down. This resulted in the invention of the stackfreed and the fusee in the 15th century, and many other innovations, down to the invention of the modern going barrel in 1760.
Early clock dials did not indicate minutes and seconds. A clock with a dial indicating minutes was illustrated in a 1475 manuscript by Paulus Almanus, and some 15th-century clocks in Germany indicated minutes and seconds.
An early record of a seconds hand on a clock dates back to about 1560 on a clock now in the Fremersdorf collection.
During the 15th and 16th centuries, clockmaking flourished, particularly in the metalworking towns of Nuremberg and Augsburg, and in Blois, France. Some of the more basic table clocks have only one time-keeping hand, with the dial between the hour markers being divided into four equal parts making the clocks readable to the nearest 15 minutes. Other clocks were exhibitions of craftsmanship and skill, incorporating astronomical indicators and musical movements. The cross-beat escapement was invented in 1584 by Jost Bürgi, who also developed the remontoire. Bürgi's clocks were a great improvement in accuracy as they were correct to within a minute a day. These clocks helped the 16th-century astronomer Tycho Brahe to observe astronomical events with much greater precision than before.
Pendulum
The next development in accuracy occurred after 1656 with the invention of the pendulum clock. Galileo had the idea to use a swinging bob to regulate the motion of a time-telling device earlier in the 17th century. Christiaan Huygens, however, is usually credited as the inventor. He determined the mathematical formula that related pendulum length to time (about 99.4 cm or 39.1 inches for the one-second movement) and had the first pendulum-driven clock made. The first model clock was built in 1657 in The Hague, but it was in England that the idea was taken up. The longcase clock (also known as the grandfather clock) was created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671. It was also at this time that clock cases began to be made of wood and clock faces to use enamel as well as hand-painted ceramics.
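Huygens' relation is the standard small-angle pendulum formula, T = 2π√(L/g). A quick check (g ≈ 9.81 m/s² is a textbook value, not quoted from the source) recovers the length stated above for a pendulum beating seconds, i.e. with a two-second full period:

```python
from math import pi

g = 9.81                         # gravitational acceleration, m/s^2
T = 2.0                          # full period in s: one-second swing each way
L = g * (T / (2 * pi)) ** 2      # rearranged from T = 2*pi*sqrt(L/g)
print(f"{L * 100:.1f} cm")       # 99.4 cm, as stated above
```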
In 1670, William Clement created the anchor escapement, an improvement over Huygens' crown escapement. Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clockmaker, and others, and the second hand was first introduced.
Hairspring
In 1675, Huygens and Robert Hooke invented the spiral balance spring, or the hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible. The great English clockmaker Thomas Tompion was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern-day configuration. The rack and snail striking mechanism for striking clocks was introduced during the 17th century and had distinct advantages over the 'countwheel' (or 'locking plate') mechanism. During the 20th century there was a common misconception that Edward Barlow invented rack and snail striking. In fact, his invention was connected with a repeating mechanism employing the rack and snail. The repeating clock, which chimes the number of hours (or even minutes) on demand, was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720.
Marine chronometer
A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. This clock could not contain a pendulum, which would be virtually useless on a rocking ship. In 1714, the British government offered large financial rewards, of up to 20,000 pounds, to anyone who could determine longitude accurately. John Harrison, who dedicated his life to improving the accuracy of his clocks, later received considerable sums under the Longitude Act.
In 1735, Harrison built his first chronometer, which he steadily improved on over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship's pitch and roll in the sea and the use of two different metals to reduce the problem of expansion from heat. The chronometer was tested in 1761 by Harrison's son and by the end of 10 weeks the clock was in error by less than 5 seconds.
Mass production
The British had dominated watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite. Although there was an attempt to modernise clock manufacture with mass-production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. In 1816, Eli Terry and some other Connecticut clockmakers developed a way of mass-producing clocks by using interchangeable parts. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that also used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company.
Early electric
In 1815, the English scientist Francis Ronalds published the first electric clock powered by dry pile batteries. Alexander Bain, a Scottish clockmaker, patented the electric clock in 1840; in such a clock, the mainspring is wound either with an electric motor or with an electromagnet and armature. In 1841, Bain first patented the electromagnetic pendulum. By the end of the nineteenth century, the advent of the dry cell battery made it feasible to use electric power in clocks. Spring or weight driven clocks that use electricity, either alternating current (AC) or direct current (DC), to rewind the spring or raise the weight of a mechanical clock would be classified as electromechanical clocks. This classification also applies to clocks that employ an electrical impulse to propel the pendulum. In electromechanical clocks the electricity serves no timekeeping function. These types of clocks were made as individual timepieces but were more commonly used in synchronized time installations in schools, businesses, factories, railroads and government facilities, as master clocks and slave clocks.
Where an AC electrical supply of stable frequency is available, timekeeping can be maintained very reliably by using a synchronous motor, essentially counting the cycles. The supply current alternates with an accurate frequency of 50 hertz in many countries, and 60 hertz in others. While the frequency may vary slightly during the day as the load changes, generators are designed to maintain an accurate number of cycles over a day, so the clock may be a fraction of a second slow or fast at any time, but will be perfectly accurate over a long time. The rotor of the motor rotates at a speed that is related to the alternation frequency. Appropriate gearing converts this rotation speed to the correct ones for the hands of the analog clock. Time in these cases is measured in several ways, such as by counting the cycles of the AC supply, vibration of a tuning fork, the behaviour of quartz crystals, or the quantum vibrations of atoms. Electronic circuits divide these high-frequency oscillations to slower ones that drive the time display.
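As a worked illustration (the motor figures below are invented, though typical of small clock motors): a synchronous motor turns at the line frequency divided by its number of pole pairs, and the gear train must reduce that rotation to one revolution per minute for a second hand.

```python
line_hz = 60                 # mains frequency in 60 Hz countries
pole_pairs = 30              # assumed slow clock motor
rotor_rps = line_hz / pole_pairs          # rotor revolutions per second
second_hand_rps = 1 / 60                  # second hand: 1 rev per minute
gear_ratio = rotor_rps / second_hand_rps  # reduction the gearing must provide
print(rotor_rps * 60, "rpm rotor;", gear_ratio, ": 1 reduction")
```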
Quartz
The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first crystal oscillator was invented in 1917 by Alexander M. Nicholson, after which the first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1927 the first quartz clock was built by Warren Marrison and J.W. Horton at Bell Telephone Laboratories in the United States. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes at the time, limited their practical use elsewhere. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. Their inherent accuracy and low cost of production resulted in the subsequent proliferation of quartz clocks and watches.
Atomic
Currently, atomic clocks are the most accurate clocks in existence. They are considerably more accurate than quartz clocks as they can be accurate to within a few seconds over trillions of years. Atomic clocks were first theorized by Lord Kelvin in 1879. In the 1930s the development of magnetic resonance created a practical method for doing this. A prototype ammonia maser device was built in 1949 at the U.S. National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). As of 2013, the most stable atomic clocks are ytterbium clocks, which are stable to within less than two parts in 1 quintillion.
Operation
The invention of the mechanical clock in the 13th century initiated a change in timekeeping methods from continuous processes, such as the motion of the gnomon's shadow on a sundial or the flow of liquid in a water clock, to periodic oscillatory processes, such as the swing of a pendulum or the vibration of a quartz crystal, which had the potential for more accuracy. All modern clocks use oscillation.
Although the mechanisms they use vary, all oscillating clocks, mechanical, electric, and atomic, work similarly and can be divided into analogous parts. They consist of an object that repeats the same motion over and over again, an oscillator, with a precisely constant time interval between each repetition, or 'beat'. Attached to the oscillator is a controller device, which sustains the oscillator's motion by replacing the energy it loses to friction, and converts its oscillations into a series of pulses. The pulses are then counted by some type of counter, and the number of counts is converted into convenient units, usually seconds, minutes, hours, etc. Finally some kind of indicator displays the result in human readable form.
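In software terms, the four parts reduce to a counter driven by a fixed-rate pulse source. A minimal sketch (the 8 Hz beat rate is invented purely to keep the numbers readable):

```python
def run_clock(beats_per_second=8, total_beats=8 * 3600 * 3):
    """Toy clock: an oscillator's beats (kept going by the controller)
    are counted, and the count is converted for a readable display."""
    count = 0
    for _ in range(total_beats):      # one pulse per oscillator beat
        count += 1                    # the counter accumulates pulses
    seconds = count // beats_per_second
    h, m, s = seconds // 3600, (seconds // 60) % 60, seconds % 60
    return f"{h:02d}:{m:02d}:{s:02d}" # the indicator's output

print(run_clock())                    # 03:00:00 after three simulated hours
```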
Power source
Oscillator
The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates repetitively at a precisely constant frequency.
In mechanical clocks, this is either a pendulum or a balance wheel.
In some early electronic clocks and watches such as the Accutron, they use a tuning fork.
In quartz clocks and watches, it is a quartz crystal.
In atomic clocks, it is the vibration of electrons in atoms as they emit microwaves.
In early mechanical clocks before 1657, it was a crude balance wheel or foliot, which was not a harmonic oscillator because it lacked a balance spring. As a result, such clocks were very inaccurate, with errors of perhaps an hour a day.
The advantage of a harmonic oscillator over other forms of oscillator is that it employs resonance to vibrate at a precise natural resonant frequency or "beat" dependent only on its physical characteristics, and resists vibrating at other rates. The possible precision achievable by a harmonic oscillator is measured by a parameter called its Q, or quality factor, which increases (other things being equal) with its resonant frequency. This is why there has been a long-term trend toward higher frequency oscillators in clocks. Balance wheels and pendulums always include a means of adjusting the rate of the timepiece. Quartz timepieces sometimes include a rate screw that adjusts a capacitor for that purpose. Atomic clocks are primary standards, and their rate cannot be adjusted.
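In standard physics notation (a general relation, not one stated in the text above), the quality factor compares the energy stored in the resonator with the energy dissipated per cycle, or equivalently the resonant frequency with the bandwidth of the resonance:

```latex
Q \;=\; 2\pi \,\frac{E_{\text{stored}}}{E_{\text{dissipated per cycle}}} \;=\; \frac{f_0}{\Delta f}
```

A narrow bandwidth (large Q) means the oscillator strongly resists being driven at any rate other than its natural frequency, which is why high-Q resonators make good timekeepers.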
Synchronized or slave clocks
Some clocks rely for their accuracy on an external oscillator; that is, they are automatically synchronized to a more accurate clock:
Slave clocks, used in large institutions and schools from the 1860s to the 1970s, kept time with a pendulum, but were wired to a master clock in the building, and periodically received a signal to synchronize them with the master, often on the hour. Later versions without pendulums were triggered by a pulse from the master clock, and certain pulse sequences were used to force rapid synchronization following a power failure.
Synchronous electric clocks do not have an internal oscillator, but count cycles of the 50 or 60 Hz oscillation of the AC power line, which is synchronized by the utility to a precision oscillator. The counting may be done electronically, usually in clocks with digital displays, or, in analog clocks, the AC may drive a synchronous motor which rotates an exact fraction of a revolution for every cycle of the line voltage, and drives the gear train. Although changes in the grid line frequency due to load variations may cause the clock to temporarily gain or lose several seconds during the course of a day, the total number of cycles per 24 hours is maintained extremely accurately by the utility company, so that the clock keeps time accurately over long periods.
Computer real-time clocks keep time with a quartz crystal, but can be periodically (usually weekly) synchronized over the Internet to atomic clocks (UTC), using the Network Time Protocol (NTP); a minimal NTP query is sketched after this list.
Radio clocks keep time with a quartz crystal, but are periodically synchronized to time signals transmitted from dedicated standard time radio stations or satellite navigation signals, which are set by atomic clocks.
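The NTP query mentioned above can be reduced to a short sketch. The code below sends a bare SNTP request and reads the server's transmit timestamp; the pool.ntp.org hostname is a public server pool assumed to be reachable, and error handling is omitted:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between the 1900 (NTP) and 1970 (Unix) epochs

def ntp_time(server="pool.ntp.org", timeout=5.0):
    """Return Unix time as reported by an NTP server, via a minimal SNTP query."""
    packet = b"\x1b" + 47 * b"\x00"   # first byte: leap 0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    seconds_since_1900 = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
    return seconds_since_1900 - NTP_EPOCH_OFFSET

print("local clock offset: %.3f s" % (ntp_time() - time.time()))
```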
Controller
This has the dual function of keeping the oscillator running by giving it 'pushes' to replace the energy lost to friction, and converting its vibrations into a series of pulses that serve to measure the time.
In mechanical clocks, this is the escapement, which gives precise pushes to the swinging pendulum or balance wheel, and releases one gear tooth of the escape wheel at each swing, allowing all the clock's wheels to move forward a fixed amount with each swing.
In electronic clocks this is an electronic oscillator circuit that gives the vibrating quartz crystal or tuning fork tiny 'pushes', and generates a series of electrical pulses, one for each vibration of the crystal, which is called the clock signal.
In atomic clocks the controller is an evacuated microwave cavity attached to a microwave oscillator controlled by a microprocessor. A thin gas of caesium atoms is released into the cavity where they are exposed to microwaves. A laser measures how many atoms have absorbed the microwaves, and an electronic feedback control system called a phase-locked loop tunes the microwave oscillator until it is at the frequency that causes the atoms to vibrate and absorb the microwaves. Then the microwave signal is divided by digital counters to become the clock signal.
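The feedback loop can be caricatured in software. The sketch below is a toy model only: the Lorentzian line shape and every numeric value (linewidth, probe offset, gain) are invented for illustration and do not describe a real caesium standard:

```python
def absorption(freq_hz, resonance=9_192_631_770.0, width=100.0):
    """Toy Lorentzian line: fraction of atoms absorbing at this frequency."""
    return 1.0 / (1.0 + ((freq_hz - resonance) / width) ** 2)

def servo(freq_hz, probe=10.0, gain=50.0, steps=500):
    """Steer the oscillator toward peak absorption by probing both sides of it."""
    for _ in range(steps):
        error = absorption(freq_hz + probe) - absorption(freq_hz - probe)
        freq_hz += gain * error  # a positive error means the peak lies above us
    return freq_hz

print(round(servo(9_192_631_600.0)))  # settles near 9192631770
```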
In mechanical clocks, the low Q of the balance wheel or pendulum oscillator made them very sensitive to the disturbing effect of the impulses of the escapement, so the escapement had a great effect on the accuracy of the clock, and many escapement designs were tried. The higher Q of resonators in electronic clocks makes them relatively insensitive to the disturbing effects of the drive power, so the driving oscillator circuit is a much less critical component.
Counter chain
This counts the pulses and adds them up to get traditional time units of seconds, minutes, hours, etc. It usually has a provision for setting the clock by manually entering the correct time into the counter.
In mechanical clocks this is done mechanically by a gear train, known as the wheel train. The gear train also has a second function: to transmit mechanical power from the power source to run the oscillator. There is a friction coupling called the 'cannon pinion' between the gears driving the hands and the rest of the clock, allowing the hands to be turned to set the time.
In digital clocks a series of integrated circuit counters or dividers add the pulses up digitally, using binary logic. Often pushbuttons on the case allow the hour and minute counters to be incremented and decremented to set the time.
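A software analogue of such a counter chain is sketched below (illustrative only; as before, the 32,768 Hz crystal frequency is a conventional figure assumed for the example):

```python
def counter_chain(pulses, pulses_per_second=32768):
    """Divide raw oscillator pulses down into traditional time units."""
    total_seconds = pulses // pulses_per_second          # divider stage
    total_minutes, seconds = divmod(total_seconds, 60)   # seconds counter
    total_hours, minutes = divmod(total_minutes, 60)     # minutes counter
    days, hours = divmod(total_hours, 24)                # hours counter
    return days, hours, minutes, seconds

# 90,061 seconds' worth of pulses is 1 day, 1 hour, 1 minute, 1 second:
assert counter_chain(90_061 * 32768) == (1, 1, 1, 1)
```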
Indicator
This displays the count of seconds, minutes, hours, etc. in a human readable form.
The earliest mechanical clocks in the 13th century did not have a visual indicator and signalled the time audibly by striking bells. Many clocks to this day are striking clocks which strike the hour.
Analog clocks display time with an analog clock face, which consists of a dial with the numbers 1 through 12 or 24, the hours in the day, around the outside. The hours are indicated with an hour hand, which makes one or two revolutions in a day, while the minutes are indicated by a minute hand, which makes one revolution per hour. In mechanical clocks a gear train drives the hands; in electronic clocks the circuit produces pulses every second which drive a stepper motor and gear train, which move the hands.
Digital clocks display the time in periodically changing digits on a digital display. A common misconception is that a digital clock is more accurate than an analog wall clock; in fact, the type of indicator is independent of the accuracy of the timing source.
Talking clocks and the speaking clock services provided by telephone companies speak the time audibly, using either recorded or digitally synthesized voices.
Types
Clocks can be classified by the type of time display, as well as by the method of timekeeping.
Time display methods
Analog
Analog clocks usually use a clock face which indicates time using rotating pointers called "hands" on a fixed numbered dial or dials. The standard clock face, known throughout the world, has a short "hour hand" which indicates the hour on a circular dial of 12 hours, making two revolutions per day, and a longer "minute hand" which indicates the minutes in the current hour on the same dial, which is also divided into 60 minutes. It may also have a "second hand" which indicates the seconds in the current minute. The only other widely used clock face today is the 24-hour analog dial, because of the use of 24-hour time in military organizations and timetables. Before the modern clock face was standardized during the Industrial Revolution, many other face designs were used over the years, including dials divided into 6, 8, 10, and 24 hours. During the French Revolution the French government tried to introduce a 10-hour clock, as part of their decimal-based metric system of measurement, but it did not achieve widespread use. An Italian 6-hour clock was developed in the 18th century, presumably to save power (a clock or watch striking up to 24 times uses more power).
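The geometry of the standard face reduces to simple arithmetic. The following sketch (illustrative, with hypothetical function names) computes the angle of each hand, measured clockwise from the 12 o'clock position:

```python
def hand_angles(hour, minute, second):
    """Degrees clockwise from 12 o'clock for each hand of a 12-hour dial."""
    second_angle = second * 6.0                     # 360 degrees / 60 seconds
    minute_angle = minute * 6.0 + second * 0.1      # the minute hand creeps with the seconds
    hour_angle = (hour % 12) * 30.0 + minute * 0.5  # 360 / 12, plus drift per minute
    return hour_angle, minute_angle, second_angle

print(hand_angles(3, 30, 0))  # (105.0, 180.0, 0.0)
```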
Another type of analog clock is the sundial, which tracks the sun continuously, registering the time by the shadow position of its gnomon. Because the sun does not adjust to daylight saving time, users must add an hour during that period. Corrections must also be made for the equation of time, and for the difference between the longitudes of the sundial and of the central meridian of the time zone that is being used (i.e. 15 degrees east of the prime meridian for each hour that the time zone is ahead of GMT). Sundials use some or all of the 24-hour analog dial. There also exist clocks which use a digital display despite having an analog mechanism; these are commonly referred to as flip clocks. Alternative systems have been proposed. For example, the "Twelv" clock indicates the current hour using one of twelve colors, and indicates the minute by showing a proportion of a circular disk, similar to a moon phase.
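The longitude correction described above amounts to four minutes of time per degree (360 degrees per 24 hours). In the sketch below, longitudes are degrees east of the prime meridian and the equation of time is taken as apparent minus mean solar time; sign conventions vary between sources, so treat this as illustrative only:

```python
def solar_to_clock_minutes(longitude_deg, zone_meridian_deg, equation_of_time_min):
    """Minutes to add to a sundial (apparent solar) reading to get zone clock time.

    Add a further 60 minutes during daylight saving time.
    """
    return 4.0 * (zone_meridian_deg - longitude_deg) - equation_of_time_min

# A sundial at 13 degrees east in a zone centred on 15 degrees east, with the
# equation of time at +3 minutes, reads 5 minutes behind the zone clock:
print(solar_to_clock_minutes(13.0, 15.0, 3.0))  # 5.0
```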
Digital
Digital clocks display a numeric representation of time. Two numeric display formats are commonly used on digital clocks:
the 24-hour notation with hours ranging 00–23;
the 12-hour notation with AM/PM indicator, with hours indicated as 12AM, followed by 1AM–11AM, followed by 12PM, followed by 1PM–11PM (a notation mostly used in domestic environments).
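A minimal conversion between the two notations (illustrative code, not part of any standard):

```python
def to_12_hour(hour_24):
    """Map an hour in the range 0-23 to the 12-hour notation described above."""
    suffix = "AM" if hour_24 < 12 else "PM"
    hour = hour_24 % 12
    return f"{12 if hour == 0 else hour}{suffix}"

assert to_12_hour(0) == "12AM" and to_12_hour(12) == "12PM" and to_12_hour(15) == "3PM"
```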
Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays; many other display technologies are used as well (cathode-ray tubes, nixie tubes, etc.). After a reset, battery change, or power failure, clocks without a backup battery or capacitor either start counting from 12:00 or stay at 12:00, often with blinking digits indicating that the time needs to be set. Some newer clocks will reset themselves based on radio or Internet time servers that are tuned to national atomic clocks. Since the introduction of digital clocks in the 1960s, there has been a notable decline in the use of analog clocks.
Some clocks, called 'flip clocks', have digital displays that work mechanically. The digits are painted on sheets of material which are mounted like the pages of a book. Once a minute, a page is turned over to reveal the next digit. These displays are usually easier to read in brightly lit conditions than LCDs or LEDs. Also, they do not go back to 12:00 after a power interruption. Flip clocks generally do not have electronic mechanisms. Usually, they are driven by AC-synchronous motors.
Hybrid (analog-digital)
Clocks with analog quadrants and a digital component usually display minutes and hours analogously and seconds in digital mode.
Auditory
For convenience, distance, telephony, or blindness, auditory clocks present the time as sounds. The sound is either spoken natural language (e.g. "The time is twelve thirty-five"), or an auditory code (e.g. the number of sequential bell rings on the hour represents the number of the hour, as with the bell Big Ben). Most telecommunication companies also provide a speaking clock service.
Word
Word clocks are clocks that display the time visually using sentences. E.g.: "It's about three o'clock." These clocks can be implemented in hardware or software.
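A software word clock can be sketched in a few lines; the rounding granularity and phrasing below are arbitrary choices for illustration:

```python
HOURS = ["twelve", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten", "eleven"]

def word_clock(hour, minute):
    """Round to the nearest five minutes and phrase the time as a sentence."""
    minute = 5 * round(minute / 5)
    if minute == 60:
        minute, hour = 0, hour + 1
    if minute == 0:
        return f"It's {HOURS[hour % 12]} o'clock."
    if minute == 30:
        return f"It's half past {HOURS[hour % 12]}."
    if minute < 30:
        return f"It's {minute} past {HOURS[hour % 12]}."
    return f"It's {60 - minute} to {HOURS[(hour + 1) % 12]}."

print(word_clock(2, 58))  # It's three o'clock.
```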
Projection
Some clocks, usually digital ones, include an optical projector that shines a magnified image of the time display onto a screen or onto a surface such as an indoor ceiling or wall. The digits are large enough to be easily read, without using glasses, by persons with moderately imperfect vision, so the clocks are convenient for use in their bedrooms. Usually, the timekeeping circuitry has a battery as a backup source for an uninterrupted power supply to keep the clock on time, while the projection light only works when the unit is connected to an A.C. supply. Completely battery-powered portable versions resembling flashlights are also available.
Tactile
Auditory and projection clocks can be used by people who are blind or have limited vision. There are also clocks for the blind that have displays that can be read by using the sense of touch. Some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. Another type is essentially digital, and uses devices that use a code such as Braille to show the digits so that they can be felt with the fingertips.
Multi-display
Some clocks have several displays driven by a single mechanism, and some others have several completely separate mechanisms in a single case. Clocks in public places often have several faces visible from different directions, so that the clock can be read from anywhere in the vicinity; all the faces show the same time. Other clocks show the current time in several time-zones. Watches that are intended to be carried by travellers often have two displays, one for the local time and the other for the time at home, which is useful for making pre-arranged phone calls. Some equation clocks have two displays, one showing mean time and the other solar time, as would be shown by a sundial. Some clocks have both analog and digital displays. Clocks with Braille displays usually also have conventional digits so they can be read by sighted people.
Purposes
Clocks are in homes, offices and many other places; smaller ones (watches) are carried on the wrist or in a pocket; larger ones are in public places, e.g. a railway station or church. A small clock is often shown in a corner of computer displays, mobile phones and many MP3 players.
The primary purpose of a clock is to display the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; they are referred to as alarm clocks. The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called training clocks.
A clock mechanism may be used to control a device according to time, e.g. a central heating system, a VCR, or a time bomb (see: digital counter). Such mechanisms are usually called timers. Clock mechanisms are also used to drive devices such as solar trackers and astronomical telescopes, which have to turn at accurately controlled speeds to counteract the rotation of the Earth.
Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles.
Time standards
For some scientific work timing of the utmost accuracy is essential. It is also necessary to have a standard of the maximum accuracy against which working clocks can be calibrated. An ideal clock would give the time to unlimited accuracy, but this is not realisable. Many physical processes, in particular including some transitions between atomic energy levels, occur at exceedingly stable frequency; counting cycles of such a process can give a very accurate and consistent time—clocks which work this way are usually called atomic clocks. Such clocks are typically large, very expensive, require a controlled environment, and are far more accurate than required for most purposes; they are typically used in a standards laboratory.
Navigation
Until advances in the late twentieth century, navigation depended on the ability to measure latitude and longitude. Latitude can be determined through celestial navigation; the measurement of longitude requires accurate knowledge of time. This need was a major motivation for the development of accurate mechanical clocks. John Harrison created the first highly accurate marine chronometer in the mid-18th century. The Noon gun in Cape Town still fires an accurate signal to allow ships to check their chronometers. Many buildings near major ports used to have (some still do) a large ball mounted on a tower or mast arranged to drop at a pre-determined time, for the same purpose. While satellite navigation systems such as GPS require unprecedentedly accurate knowledge of time, this is supplied by equipment on the satellites; vehicles no longer need timekeeping equipment.
Sports and games
Clocks can be used to measure varying periods of time in games and sports. Stopwatches can be used to time the performance of track athletes. Chess clocks are used to limit the board game players' time to make a move. In various sports, game clocks measure the duration of the game or its subdivisions, while other clocks may be used for tracking different durations; these include play clocks, shot clocks, and pitch clocks.
Culture
Folklore and superstition
In the United Kingdom, clocks are associated with various beliefs, many involving death or bad luck. In legends, clocks have reportedly stopped of their own accord upon a nearby person's death, especially those of monarchs. The clock in the House of Lords supposedly stopped at "nearly" the hour of George III's death in 1820, the one at Balmoral Castle stopped during the hour of Queen Victoria's death, and similar legends are related about clocks associated with William IV and Elizabeth I. Many superstitions exist about clocks. One stopping before a person has died may foretell coming death. Similarly, if a clock strikes during a church hymn or a marriage ceremony, death or calamity is prefigured for the parishioners or a spouse, respectively. Death or ill events are foreshadowed if a clock strikes the wrong time. It may also be unlucky to have a clock face a fire or to speak while a clock is striking.
In Chinese culture, giving a clock () is often taboo, especially to the elderly, as it is a homophone of the act of attending another's funeral ().
Specific types
Awards
Grand Prix d'Horlogerie de Genève (GPHG)
See also
24-hour analog dial
Allan variance
Allen-Bradley Clock Tower at Rockwell Automation Headquarters Building (Wisconsin)
American Watchmakers-Clockmakers Institute
BaselWorld
Biological clock
Clockarium
The clock as herald of the Industrial Revolution (Lewis Mumford)
Clock drift
Clock ident
Clock network
Clock of the Long Now
Colgate Clock (Indiana)
Colgate Clock (New Jersey), largest clock in US
Cosmo Clock 21, world's largest clock
Cox's timepiece
Cuckooland Museum
Date and time representation by country
Debt clock
Le Défenseur du Temps (automata)
Department of Defense master clock (U.S.)
Doomsday Clock
Earth clock
Federation of the Swiss Watch Industry FH
Guard tour patrol system (watchclocks)
Iron Ring Clock
Jens Olsen's World Clock
Jewel bearing
List of biggest clock faces
List of international common standards
List of largest cuckoo clocks
National Association of Watch and Clock Collectors
Replica watch
Rubik's Clock
Star clock
Singing bird box
System time
Timeline of time measurement technology
Watchmaker
Notes and references
Bibliography
Baillie, G.H., O. Clutton, & C.A. Ilbert. Britten's Old Clocks and Watches and Their Makers (7th ed.). Bonanza Books (1956).
Bolter, David J. Turing's Man: Western Culture in the Computer Age. The University of North Carolina Press, Chapel Hill, NC (1984). pbk. Summary of the role of "the clock" in its setting the direction of philosophic movement for the "Western World". Cf. picture on p. 25 showing the verge and foliot. Bolter derived the picture from Macey, p. 20.
Edey, Winthrop. French Clocks. New York: Walker & Co. (1967).
Kak, Subhash, Babylonian and Indian Astronomy: Early Connections. 2003.
Kumar, Narendra. "Science in Ancient India" (2004).
Landes, David S. Revolution in Time: Clocks and the Making of the Modern World. Cambridge: Harvard University Press (1983).
Landes, David S. Clocks & the Wealth of Nations, Daedalus Journal, Spring 2003.
Lloyd, Alan H. "Mechanical Timekeepers", A History of Technology, Vol. III. Edited by Charles Joseph Singer et al. Oxford: Clarendon Press (1957), pp. 648–675.
Macey, Samuel L., Clocks and the Cosmos: Time in Western Life and Thought, Archon Books, Hamden, Conn. (1980).
North, John. God's Clockmaker: Richard of Wallingford and the Invention of Time. London: Hambledon and London (2005).
Opie, Iona, & Moira Tatem. "A Dictionary of Superstitions". Oxford: Oxford University Press (1990).
Palmer, Brooks. The Book of American Clocks, The Macmillan Co. (1979).
Robinson, Tom. The Longcase Clock. Suffolk, England: Antique Collector's Club (1981).
Smith, Alan. The International Dictionary of Clocks. London: Chancellor Press (1996).
Tardy. French Clocks the World Over. Part I and II. Translated with the assistance of Alexander Ballantyne. Paris: Tardy (1981).
Yoder, Joella Gerstmeyer. Unrolling Time: Christiaan Huygens and the Mathematization of Nature. New York: Cambridge University Press (1988).
Zea, Philip, & Robert Cheney. Clock Making in New England: 1725–1825. Old Sturbridge Village (1992).
External links
National Association of Watch & Clock Collectors Museum
Blackboard clock
Time measurement systems
Articles containing video clips | Clock | [
"Physics",
"Technology",
"Engineering"
] | 10,644 | [
"Machines",
"Physical quantities",
"Time",
"Time measurement systems",
"Clocks",
"Measuring instruments",
"Physical systems",
"Spacetime"
] |
6,512 | https://en.wikipedia.org/wiki/Coercion | Coercion involves compelling a party to act in an involuntary manner through the use of threats, including threats to use force against that party. It involves a set of forceful actions which violate the free will of an individual in order to induce a desired response. These actions may include extortion, blackmail, or even torture and sexual assault. Common-law systems codify the act of violating a law while under coercion as a duress crime.
Coercion used as leverage may force victims to act in a way contrary to their own interests. Coercion can involve not only the infliction of bodily harm, but also psychological abuse (the latter intended to enhance the perceived credibility of the threat). The threat of further harm may also lead to the acquiescence of the person being coerced. The concepts of coercion and persuasion are similar, but various factors distinguish the two. These include the intent, the willingness to cause harm, the result of the interaction, and the options available to the coerced party.
Political authors such as John Rawls, Thomas Nagel, and Ronald Dworkin contend whether governments are inherently coercive. In 1919, Max Weber (1864–1920), building on the view of Ihering (1818–1892), defined a state as "a human community that (successfully) claims a monopoly on the legitimate use of physical force". Morris argues that the state can operate through incentives rather than coercion. Healthcare systems may use informal coercion to make a patient adhere to a doctor's treatment plan. Under certain circumstances, medical staff may use physical coercion to treat a patient involuntarily.
Overview
The purpose of coercion is to substitute the victim's own aims with those the aggressor wants the victim to adopt. For this reason, many social philosophers have considered coercion the polar opposite of freedom. Various forms of coercion are distinguished: first on the basis of the kind of injury threatened, second according to its aims and scope, and finally according to its effects, from which its legal, social, and ethical implications mostly depend.
Physical
Physical coercion is the most commonly considered form of coercion, where the content of the conditional threat is the use of force against a victim, their relatives or property. An often used example is "putting a gun to someone's head" (at gunpoint) or putting a "knife under the throat" (at knifepoint or cut-throat) to compel action under the threat that non-compliance may result in the attacker harming or even killing the victim. These are so common that they are also used as metaphors for other forms of coercion.
Armed forces in many countries use firing squads to maintain discipline and intimidate the masses, or opposition, into submission or silent compliance. However, there also are nonphysical forms of coercion, where the threatened injury does not immediately imply the use of force. Byman and Waxman (2000) define coercion as "the use of threatened force, including the limited use of actual force to back up the threat, to induce an adversary to behave differently than it otherwise would." Coercion does not in many cases amount to destruction of property or life since compliance is the goal.
Pain compliance
Psychological
In psychological coercion, the threatened injury regards the victim's relationships with other people. The most obvious example is blackmail, where the threat consists of the dissemination of damaging information. However, many other types are possible e.g. "emotional blackmail", which typically involves threats of rejection from or disapproval by a peer-group, or creating feelings of guilt/obligation via a display of anger or hurt by someone whom the victim loves or respects.
See also
Notes
References
Lifton, Robert J. (1961) Thought Reform and the Psychology of Totalism, Penguin Books.
External links
Carter, Barry E. Economic Coercion, Max Planck Encyclopedia of Public International Law (subscription required)
Abuse
Authority
Harassment and bullying
Legal terminology
Psychological abuse
Interrogation techniques
Power (social and political) concepts
Concepts in political philosophy | Coercion | [
"Biology"
] | 845 | [
"Behavior",
"Abuse",
"Harassment and bullying",
"Aggression",
"Human behavior"
] |
6,513 | https://en.wikipedia.org/wiki/Client%E2%80%93server%20model | The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests.
Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
Client and server role
The server component provides a function or service to one or many clients, which initiate requests for such services.
Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.
Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
Client and server communication
Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client–server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.
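The request–response pattern can be shown concretely. The sketch below uses a toy protocol on localhost with an invented port and message contents; the server awaits a request and the client initiates the exchange:

```python
import socket
import threading
import time

def serve(host="127.0.0.1", port=9090):
    """A toy server: bind, listen, then answer a single client's request."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()                         # await incoming requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)        # read the client's request
            conn.sendall(b"GOT " + request)  # return a response

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

# The client initiates the session; the server only ever responds.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9090))
    cli.sendall(b"time please")
    print(cli.recv(1024))  # b'GOT time please'
```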
A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates.
Encryption should be applied if sensitive information is to be communicated between the client and the server.
Example
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the webserver accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the webserver. Finally, the webserver returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer.
This example illustrates a design pattern applicable to the client–server model: separation of concerns.
Server-side
Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations which run on the client. (See below)
General concepts
"Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure.
Client and server programs may be commonly available ones such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another.
Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks.
Computer security
In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access database and other files in the same manner as authorized administrators of the server.
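The mechanism of an SQL injection is easy to demonstrate against a throwaway in-memory database. In the sketch below (hypothetical table and data), the vulnerable query splices user input into the SQL text, while the safe version passes it as a bound parameter:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "x' OR '1'='1"  # hostile input

# Vulnerable: the input becomes part of the SQL text and changes its meaning.
leaked = db.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()
print(leaked)  # [('s3cret',)] -- the OR clause matches every row

# Safe: a parameterized query treats the input as data, never as SQL.
safe = db.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
print(safe)    # []
```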
Examples
In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc.
In the context of the World Wide Web, commonly encountered server-side computer languages include:
C# or Visual Basic in ASP.NET environments
Java
Perl
PHP
Python
Ruby
Node.js
Swift
However, web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use.
Client side
Client-side refers to operations that are performed by the client in a computer network.
General concepts
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another.
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
Computer security
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.
Examples
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
In the context of the World Wide Web, commonly encountered computer languages which are evaluated or run on the client side include:
Cascading Style Sheets (CSS)
HTML
JavaScript
Early history
An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet).
Client-host and server-host
Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word server had entered into general parlance.
Centralized computing
The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions.
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.
Comparison with peer-to-peer architecture
In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.
In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.
Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
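A round-robin scheme, one of the simplest balancing policies, can be sketched as follows (hypothetical backend addresses):

```python
import itertools

class RoundRobinBalancer:
    """Hand each incoming request to the next backend in rotation."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        return next(self._cycle), request

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(lb.route(req))  # the fourth request wraps back to 10.0.0.1
```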
In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.
See also
Notes
Servers (computing)
Clients (computing)
Inter-process communication
Network architecture | Client–server model | [
"Engineering"
] | 3,143 | [
"Network architecture",
"Computer networks engineering"
] |
6,516 | https://en.wikipedia.org/wiki/Cosmological%20argument | In the philosophy of religion, a cosmological argument is an argument for the existence of God based upon observational and factual statements concerning the universe (or some general category of its natural contents) typically in the context of causation, change, contingency or finitude. In referring to reason and observation alone for its premises, and precluding revelation, this category of argument falls within the domain of natural theology. A cosmological argument can also sometimes be referred to as an argument from universal causation, an argument from first cause, the causal argument or the prime mover argument.
The concept of causation is a principal underpinning idea in all cosmological arguments, particularly in affirming the necessity for a First Cause. The latter is typically determined in philosophical analysis to be God, as identified within classical conceptions of theism.
The origins of the argument date back to at least Aristotle, developed subsequently within the scholarly traditions of Neoplatonism and early Christianity, and later under medieval Islamic scholasticism through the 9th to 12th centuries. It would eventually be re-introduced to Christian theology in the 13th century by Thomas Aquinas. In the 18th century, it would become associated with the principle of sufficient reason formulated by Gottfried Leibniz and Samuel Clarke, itself an exposition of the Parmenidean causal principle that "nothing comes from nothing".
Contemporary defenders of cosmological arguments include William Lane Craig, Robert Koons, John Lennox, Stephen Meyer, and Alexander Pruss.
History
Classical philosophy
Plato (c. 427–347 BC) and Aristotle (c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats. In The Laws (Book X), Plato posited that all movement in the world and the Cosmos was "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. In Timaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos.
Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" ( or primus motor) in his Physics and Metaphysics. Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued the atomists' assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists.
Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and a deity; functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire", the celestial spheres imitate that purely intellectual activity as best they can, by uniform circular motion. The unmoved movers inspiring the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety.
Late antiquity to the Islamic Golden Age
Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist simply as a consequence of its existence (creatio ex deo). His disciple Proclus stated, "The One is God". In the 6th century, Syriac Christian neo-Platonist John Philoponus (c. 490–c. 570) examined the contradiction between Greek pagan adherences to the concept of a past-eternal world and Aristotelian rejection of the existence of actual infinities. Upon this foundation, he formulated arguments in defense of temporal finitism, which underpinned his arguments for the existence of God. Philosopher Steven M. Duncan notes that Philoponus's ideas eventually received their fullest articulation "at the hands of Muslim and Jewish exponents of kalam", or medieval Islamic scholasticism.
In the 11th century, Islamic philosopher Avicenna (c. 980–1037) inquired into the question of being, in which he distinguished between essence (māhiyya) and existence (wuǧūd). He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing.
Medieval Christian theology
Thomas Aquinas (c. 1225–1274) adapted and enhanced the argument he found in his reading of Aristotle, Avicenna (the Proof of the Truthful) and Maimonides to formulate one of the most influential versions of the cosmological argument. His conception of the first cause was the idea that the universe must be caused by something that is itself uncaused, which he claimed is "that which we call God".
Importantly, Aquinas's Five Ways, given in the second question of his Summa Theologica, are not the entirety of Aquinas's demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas's Treatise on the Divine Nature.
General principles
The infinite regress
A regress is a series of related elements, arranged in some type of sequence of succession, examined in backwards succession (regression) from a fixed point of reference. Depending on the type of regress, this retrograde examination may take the form of recursive analysis, in which the elements in a series are studied as products of prior, often simpler, elements. If there is no 'last member' in a regress (i.e. no 'first member' in the series) it becomes an infinite regress, continuing in perpetuity. In the context of the cosmological argument the term 'regress' usually refers to causal regress, in which the series is a chain of cause and effect, with each element in the series arising from causal activity of the prior member. Some variants of the argument may also refer to temporal regress, wherein the elements are past events (discrete units of time) arranged in a temporal sequence.
An infinite regress argument attempts to establish the falsity of a proposition by showing that it entails an infinite regress that is vicious. The cosmological argument is a type of positive infinite regress argument given that it defends a proposition (in this case, the existence of a first cause) by arguing that its negation would lead to a vicious regress. An infinite regress may be vicious due to various reasons:
Impossibility: Thought experiments such as Hilbert's Hotel are cited to demonstrate the metaphysical impossibility of actual infinities existing in reality. Accordingly, it may be argued that an infinite causal or temporal regress cannot occur in the real world.
Implausibility: The regress contradicts empirical evidence (e.g. for the finitude of the past) or basic principles such as Occam's razor.
Explanatory failure: A failure of explanatory goals resulting in an infinite regress of explanations. This may arise in the case of logical fallacies such as begging the question or from an attempt to investigate causes concerning origins or fundamental principles.
Accidental and essential ordering of causes
Aquinas refers to the distinction found in Aristotle's Physics (8.5) that a series of causes may either be accidental or essential, though the designation of this terminology would follow later under John Duns Scotus at the turn of the 14th century.
In an accidentally ordered series of causes, earlier members need not continue exerting causal activity (having done so to propagate the chain) for the series to continue. For example, in a generational line, ancestors need no longer exist for their offspring to continue the sequence of descent. In an essential series, prior members must maintain causal interrelationship for the series to continue: If a hand grips a stick that moves a rock along the ground, the rock would stop motion once the hand or stick ceases to exist.
Based upon this distinction, Frederick Copleston (1907–1994) characterises two types of causation: causes in fieri, which cause an effect's becoming, or coming into existence, and causes in esse, which causally sustain an effect, in being, once it exists.
Two specific properties of an essentially ordered series have significance in the context of the cosmological argument:
A first cause is essential: Later members exercise no independent causal power in continuing the series. In the example illustrated above, the rock derives its causal power essentially from the stick, which derives its causal power essentially from the hand.
All members in the causal series must exist simultaneously in time, or timelessly.
Thomistic philosopher, R. P. Phillips comments on the characteristics of essential ordering:
"Each member of the series of causes possesses being solely by virtue of the actual present operation of a superior cause ... Life is dependent inter alia on a certain atmospheric pressure, this again on the continual operation of physical forces, whose being and operation depends on the position of the earth in the solar system, which itself must endure relatively unchanged, a state of being which can only be continuously produced by a definite—if unknown—constitution of the material universe. This constitution, however, cannot be its own cause ... We are thus irresistibly led to posit a first efficient cause which, while itself uncaused, shall impart causality to a whole series."
Versions of the argument
Aquinas's argument from contingency
In the scholastic era, Aquinas formulated the "argument from contingency", following Aristotle, in claiming that there must be something to explain the existence of the universe. Since the universe could, under different circumstances, conceivably not exist (i.e. it is contingent) its existence must have a cause. This cause cannot be embodied in another contingent thing, but something that exists by necessity (i.e. that must exist in order for anything else to exist). It is a form of argument from universal causation, therefore compatible with the conception of a universe that has no beginning in time. In other words, according to Aquinas, even if the universe has always existed, it still owes its continuing existence to an uncaused cause, he states: "... and this we understand to be God."
Aquinas's argument from contingency is formulated as the Third Way (Q2, A3) in the Summa Theologica. It may be expressed as follows:
There exist contingent things, for which non-existence is possible.
It is impossible for contingent things to always exist, so at some time they did not exist.
Therefore, if all things are contingent, then nothing would exist now.
There exists something rather than nothing.
He concludes thereupon that contingent beings are an insufficient explanation for the existence of other contingent beings. Furthermore, that there must exist a necessary being, whose non-existence is impossible, to explain the origination of all contingent beings.
Therefore, there exists a necessary being.
It is possible that a necessary being has a cause of its necessity in another necessary being.
The derivation of necessity between beings cannot regress to infinity (being an essentially ordered causal series).
Therefore, there exists a being that is necessary of itself, from which all necessity derives.
That being is whom everyone calls God.
Leibnizian cosmological argument
In 1714, German philosopher Gottfried Leibniz presented a variation of the cosmological argument based upon the principle of sufficient reason. He writes: "There can be found no fact that is true or existent, or any true proposition, without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." Stating his argument succinctly:
"Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself."
Alexander Pruss formulates the argument as follows:
1. Every contingent fact has an explanation.
2. There is a contingent fact that includes all other contingent facts.
3. Therefore, there is an explanation of this fact.
4. This explanation must involve a necessary being.
5. This necessary being is God.
Premise 1 expresses the principle of sufficient reason (PSR). In premise 2, Leibniz proposes the existence of a logical conjunction of all contingent facts, referred to in later literature as the Big Conjunctive Contingent Fact (BCCF), representing the sum total of contingent reality. Premise 3 applies the principle of sufficient reason to the BCCF, given that it too, as a contingency, has a sufficient explanation. It follows, in statement 4, that the explanation of the BCCF must be necessary, not contingent, given that the BCCF incorporates all contingent facts. Statement 5 proposes that the necessary being explaining the totality of contingent facts is God.
Philosophers such as Joshua Rasmussen and T. Ryan Byerly have argued in defence of the inference from statement 4 to statement 5.
Duns Scotus's metaphysical argument
At the turn of the 14th century, medieval Christian theologian Duns Scotus (1265/66–1308) formulated a metaphysical argument for the existence of God inspired by Aquinas's argument of the unmoved mover. Like other philosophers and theologians, Scotus believed that his statement for God's existence could be considered distinct to that of Aquinas. The form of the argument can be summarised as follows:
1. An effect cannot be produced by itself.
2. An effect cannot be produced by nothing.
3. A circle of causes is impossible.
4. Therefore, an effect must be produced by something else.
5. An accidentally ordered causal series cannot exist without an essentially ordered series.
   a. Each member in an accidentally ordered series (except a possible first) exists via causal activity of a prior member.
   b. That causal activity is exercised by virtue of a certain form.
   c. Therefore, that form is required by each member to effect causation.
   d. The form itself is not a member of the series.
   e. Therefore [c, d], accidentally ordered causes cannot exist without higher-order (essentially ordered) causes.
6. An essentially ordered causal series cannot regress to infinity.
7. Therefore [4, 5, 6], there exists a first agent.
Scotus affirms, in premise 5, that an accidentally ordered series of causes is impossible without higher-order laws and processes that govern the basic nature of all causal activity, which he characterises as essentially ordered causes.
Premise 6 continues, in accordance with Aquinas's discourses on the Second Way and Third Way, that an essentially ordered series of causes cannot be an infinite regress. On this, Scotus posits that, if it is merely possible that a first agent exists, then it is necessarily true that a first agent exists, given that the non-existence of a first agent entails the impossibility of its own existence (by virtue of being a first cause in the chain). He argues further that it is not impossible for a being to exist that is causeless by virtue of ontological perfection.
With the formulation of this argument, Scotus establishes the first component of his 'triple primacy': The characterisation of a being that is first in efficient causality, final causality and pre-eminence, or maximal excellence, which he ascribes to God.
Kalam cosmological argument
The Kalam cosmological argument's central thesis is the impossibility of an infinite temporal regress of events (or past-infinite universe). Though its modern formulation defends the finitude of the past through philosophical and scientific arguments, many of the argument's ideas originate in the writings of early Christian theologian John Philoponus (490–570 AD); they were developed within medieval Islamic scholasticism through the 9th to 12th centuries and eventually returned to Christian theological scholarship in the 13th century.
These ideas were revitalised for modern discourse by philosopher and theologian William Lane Craig through publications such as The Kalām Cosmological Argument (1979) and the Blackwell Companion to Natural Theology (2009). The form of the argument popularised by Craig is expressed in two parts, as an initial deductive syllogism followed by further philosophical analysis.
Initial syllogism
Everything that begins to exist has a cause.
The universe began to exist.
Therefore, the universe has a cause.
Conceptual analysis of the conclusion
Craig argues that the cause of the universe necessarily embodies specific properties in creating the universe ex nihilo and in effecting creation from a timeless state (implying free agency). Based upon this analysis, he appends a further premise and conclusion:
If the universe has a cause, then an uncaused, personal Creator of the universe exists who sans (without) the universe is beginningless, changeless, immaterial, timeless, spaceless and enormously powerful.
Therefore, an uncaused, personal Creator of the universe exists, who sans the universe is beginningless, changeless, immaterial, timeless, spaceless and enormously powerful.
For scientific evidence of the finitude of the past, Craig refers to the Borde-Guth-Vilenkin theorem, which posits a past boundary to cosmic inflation, and the general consensus on the standard model of cosmology, which refers to the origin of the universe in the Big Bang.
For philosophical evidence, he cites Hilbert's paradox of the grand hotel and Bertrand Russell's tale of Tristram Shandy to prove (respectively) the impossibility of actual infinites existing in reality and of forming an actual infinite by successive addition. He concludes that past events, in comprising a series of events that are instantiated in reality and formed by successive addition, cannot extend to an infinite past.
Craig remarks upon the theological implications that follow from the final conclusion of this argument:
"... our whole universe was caused to exist by something beyond it and greater than it. For it is no secret that one of the most important conceptions of what theists mean by 'God' is Creator of heaven and earth."
Criticism and discourse
"What caused the first cause?"
Objections to the cosmological argument may question why a first cause is unique in that it does not require any causes. Critics contend that the concept of a first cause qualifies as special pleading, or that arguing for the first cause's exemption raises the question of why the first cause is indeed exempt. Defenders maintain that this question is addressed by various formulations of the cosmological argument, emphasizing that none of its major iterations rests on the premise that everything requires a cause.
Andrew Loke refers to the Kalam cosmological argument, in which the causal premise ("whatever begins to exist has a cause") stipulates that only things which begin to exist require a cause. William Lane Craig asserts that—even if one posits a plurality of causes for the existence of the universe—a first uncaused cause is necessary, otherwise an infinite regress of causes would arise, which he argues is impossible. Similarly, Edward Feser proposes, in accordance with Aquinas's discourses on the Second Way, that an essentially ordered series of causes cannot regress to infinity, even if it may be theoretically possible for accidentally ordered causes to do so.
Various arguments have been presented to demonstrate the metaphysical impossibility of an actually infinite regress occurring in the real world, referring to thought experiments such as Hilbert's Hotel, the tale of Tristram Shandy, and variations.
"Why can't the universe be causeless?"
Various philosophers maintain that the Causal Principle is rooted in experience and therefore falls within the category of a posteriori knowledge. Notably, David Hume characterises the causal relation as not truly a priori, and therefore subject to the problem of induction. In contrast, William Lane Craig affirms that the principle is self-evidently true, predicated on the metaphysical intuition that nothing comes from nothing. He adds that, if the principle were false, it would be inexplicable why anything and everything does not randomly come into existence without a cause.
Whereas J. L. Mackie argues that cause and effect cannot be extrapolated to the origins of the universe based upon our inductive experiences and intellectual preferences, Craig maintains that causal laws are unrestricted metaphysical truths that are "not contingent upon the properties, causal powers, and dispositions of the natural kinds of substances which happen to exist".
"Why should the cause be God?"
Secular philosophers such as Michael Martin argue that a cosmological argument may establish the existence of a first cause, but falls short of identifying that cause as personal, or as God as defined within classical or other specific conceptions of theism.
Defenders of the argument note that most formulations, such as by Aquinas, Duns Scotus and Craig, employ conceptual analysis to establish the identity of the cause. In Aquinas's Summa Theologica, the Prima Pars (First Part) is devoted predominantly to establishing the attributes of the cause, such as uniqueness, perfection and intelligence. In Scotus's Ordinatio, his metaphysical argument is the first component of the 'triple primacy' through which he characterises the first cause as a being with the attributes of maximal excellence.
Timeless origin of the universe
On the topic of cosmic origins and the standard model of cosmology, the initial singularity of the Big Bang is postulated to be the point at which space and time, as well as all matter and energy, came into existence. J. Richard Gott and James E. Gunn assert that the question of "What was there before the Universe?" makes no sense and that the concept of before becomes meaningless when considering a timeless state. They add that questioning what occurred before the Big Bang is akin to questioning what is north of the North Pole.
Craig refers to Kant's postulate that a cause can be simultaneous with its effect, denoting that this is true of the moment of creation when time itself came into being. He affirms that the history of 20th century cosmology belies the proposition that researchers have no strong intuition to pursue a causal explanation of the origin of time and the universe. Accordingly, physicists have sought to examine the causal origins of the Big Bang by conjecturing such scenarios as the collision of membranes. Feser also notes that versions of the cosmological argument presented by classical philosophers do not require a commitment to the Big Bang, or even to a cosmic origin.
The Hume-Edwards principle
William L. Rowe characterises the Hume-Edwards principle, referring to arguments presented by David Hume, and later Paul Edwards, in their criticisms of the cosmological argument:
The principle stipulates that a causal series—even one that regresses to infinity—requires no explanatory causes beyond those that are members within that series. If every member of a series has a causal explanation within the sequence, the series in itself is explanatorily complete. Thus, it rejects arguments, such as by Duns Scotus, for the existence of higher-order, efficient causes that are responsible for the basic principles of causal interaction. Notably, it contradicts Hume's own Dialogues Concerning Natural Religion, in which the character Demea reflects that, even if a succession of causes is infinite, the whole chain still requires a cause.
Causal loop arguments
Some objections to the cosmological argument refer to the possibility of loops in the structure of cause and effect that would avoid the need for a first cause. Gott and Li refer to the curvature of spacetime and closed timelike curves as possible mechanisms by which the universe may bring about its own existence. Richard Hanley contends that causal loops are neither logically nor physically impossible, remarking: "[In timed systems] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them."
Andrew Loke argues that there is insufficient evidence to postulate a causal loop of the type that would avoid a first cause. He proposes that such a mechanism would suffer from the problem of vicious circularity, rendering it metaphysically impossible.
See also
Creatio ex nihilo
Ex nihilo nihil fit
Argument
Biblical cosmology
Chaos
Cosmogony
Creation myth
Dating Creation
Determinism
First Principle
First cause
Infinitism
Logos
Present
Psychology
Quinque viae
Semantics
Semiotics
Temporal finitism
Timeline of the Big Bang
Transtheism
Unmoved mover
References
External links
Arguments for the existence of God
Causality
Concepts in metaphysics
Philosophy of religion | Cosmological argument | [
"Physics"
] | 5,344 | [] |
6,543 | https://en.wikipedia.org/wiki/Carnivore | A carnivore, or meat-eater (from Latin caro, genitive carnis, meaning "meat" or "flesh", and vorare, meaning "to devour"), is an animal or plant whose nutrition and energy requirements are met by consumption of animal tissues (mainly muscle, fat and other soft tissues) as food, whether through predation or scavenging.
Nomenclature
Mammal order
The technical term for mammals in the order Carnivora is carnivoran, and they are so-named because most member species in the group have a carnivorous diet, but the similarity of the name of the order and the name of the diet causes confusion.
Many but not all carnivorans are meat eaters; a few, such as the large and small cats (Felidae), are obligate carnivores (see below). Other carnivoran families are highly variable. The ursids (bears) are one example: while the Arctic polar bear eats meat almost exclusively (more than 90% of its diet is meat), almost all other bear species are omnivorous, and one species, the giant panda, is nearly exclusively herbivorous.
Dietary carnivory is not a distinguishing trait of the order. Many mammals with highly carnivorous diets are not members of the order Carnivora. Cetaceans, for example, all eat other animals, but are paradoxically members of the almost exclusively plant-eating hooved mammals.
Carnivorous diet
Animals that depend solely on animal flesh for their nutrient requirements in nature are called hypercarnivores or obligate carnivores, whilst those that also consume non-animal food are called mesocarnivores, or facultative carnivores, or omnivores (there are no clear distinctions). A carnivore at the top of the food chain (adults not preyed upon by other animals) is termed an apex predator, regardless of whether it is an obligate or facultative carnivore. In captivity or domestic settings, obligate carnivores like cats and crocodiles can, in principle, get all their required nutrients from processed food made from plant and synthetic sources.
Outside the animal kingdom, there are several genera containing carnivorous plants (predominantly insectivores) and several phyla containing carnivorous fungi (preying mostly on microscopic invertebrates, such as nematodes, amoebae, and springtails).
Subcategories of carnivory
Carnivores are sometimes characterized by their type of prey. For example, animals that eat mainly insects and similar terrestrial arthropods are called insectivores, while those that eat mainly soft-bodied invertebrates are called vermivores. Those that eat mainly fish are called piscivores.
Carnivores may alternatively be classified according to the percentage of meat in their diet. The diet of a hypercarnivore consists of more than 70% meat, that of a mesocarnivore 30–70%, and that of a hypocarnivore less than 30%, with the balance consisting of non-animal foods, such as fruits, other plant material, or fungi.
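These percentage bands translate directly into code. The following is a minimal Python sketch; the function name and the handling of the exact 30% and 70% boundary values are illustrative choices, not fixed by the literature:

```python
def carnivory_class(meat_percentage: float) -> str:
    """Classify a diet by its meat fraction, using the bands above."""
    if meat_percentage > 70:
        return "hypercarnivore"   # more than 70% meat
    elif meat_percentage >= 30:
        return "mesocarnivore"    # 30-70% meat
    else:
        return "hypocarnivore"    # less than 30% meat

print(carnivory_class(90))  # hypercarnivore (e.g. a polar bear's >90% meat diet)
print(carnivory_class(50))  # mesocarnivore
print(carnivory_class(10))  # hypocarnivore
```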
Omnivores also consume both animal and non-animal food, and apart from their more general definition, there is no clearly defined ratio of plant vs. animal material that distinguishes a facultative carnivore from an omnivore.
Obligate carnivores
Obligate or "true" carnivores are those whose diet requires nutrients found only in animal flesh in the wild. While obligate carnivores might be able to ingest small amounts of plant matter, they lack the necessary physiology required to fully digest it. Some obligate carnivorous mammals will ingest vegetation as an emetic, a food that upsets their stomachs, to self-induce vomiting.
Obligate carnivores are diverse. The amphibian axolotl consumes mainly worms and larvae in its environment, but if necessary will consume algae. All wild felids, including feral domestic cats, require a diet of primarily animal flesh and organs. Specifically, cats have high protein requirements and their metabolisms appear unable to synthesize essential nutrients such as retinol, arginine, taurine, and arachidonic acid; thus, in nature, they must consume flesh to supply these nutrients.
Characteristics of carnivores
Characteristics commonly associated with carnivores include strength, speed, and keen senses for hunting, as well as teeth and claws for capturing and tearing prey. However, some carnivores do not hunt and are scavengers, lacking the physical characteristics to bring down prey; in addition, most hunting carnivores will scavenge when the opportunity arises. Carnivores have comparatively short digestive systems, as they are not required to break down the tough cellulose found in plants.
Many hunting animals have evolved eyes facing forward, enabling depth perception. This is almost universal among mammalian predators, while most reptile and amphibian predators have eyes facing sideways.
Prehistory of carnivory
Predation (the eating of one living organism by another for nutrition) predates the rise of commonly recognized carnivores by hundreds of millions (perhaps billions) of years. It began with single-celled organisms that phagocytosed and digested other cells, and later evolved into multicellular organisms with specialized cells that were dedicated to breaking down other organisms. Incomplete digestion of the prey organisms, some of which survived inside the predators in a form of endosymbiosis, might have led to symbiogenesis that gave rise to eukaryotes and eukaryotic autotrophs such as green and red algae.
Proterozoic origin
The earliest predators were microorganisms, which engulfed and "swallowed" other smaller cells (i.e. phagocytosis) and digested them internally. Because the earliest fossil record is poor, these first predators could date back anywhere between 1 and over 2.7 bya (billion years ago).
The rise of eukaryotic cells at around 2.7 bya, the rise of multicellular organisms at about 2 bya, and the rise of motile predators (around 600 Mya – 2 bya, probably around 1 bya) have all been attributed to early predatory behavior, and many very early remains show evidence of boreholes or other markings attributed to small predator species.
The sudden disappearance of the Precambrian Ediacaran biota, which were mostly bottom-dwelling filter feeders and grazers, at the end-Ediacaran extinction has been hypothesized to be partly caused by increased predation from newer animals with hardened skeletons and mouthparts.
Paleozoic
The degradation of seafloor microbial mats due to the Cambrian substrate revolution led to increased active predation among animals, likely triggering various evolutionary arms races that contributed to the rapid diversification during the Cambrian explosion. Radiodont arthropods, which produced the first apex predators such as Anomalocaris, quickly became the dominant carnivores of the Cambrian sea. After their decline due to the Cambrian-Ordovician extinction event, the niches of large carnivores were taken over by nautiloid cephalopods such as Cameroceras and later eurypterids such as Jaekelopterus during the Ordovician and Silurian periods.
The first vertebrate carnivores appeared after the evolution of jawed fish, especially armored placoderms such as the massive Dunkleosteus. The dominance of placoderms in the Devonian ocean forced other fish to venture into other niches, and one clade of bony fish, the lobe-finned fish, became the dominant carnivores of freshwater wetlands formed by early land plants. Some of these fish became better adapted for breathing air and eventually gave rise to amphibian tetrapods. These early tetrapods were large semi-aquatic piscivores and riparian ambush predators that hunted terrestrial arthropods (mainly arachnids and myriapods), and one group in particular, the temnospondyls, became terrestrial apex predators that hunted other tetrapods.
The dominance of temnospondyls around wetland habitats throughout the Carboniferous forced other amphibians to evolve into amniotes, which had adaptations that allowed them to live farther away from water bodies. These amniotes began to evolve both carnivory, a natural transition from insectivory requiring minimal adaptation, and herbivory, which took advantage of the abundant coal-forest foliage but in contrast required a complex set of adaptations for digesting cellulose- and lignin-rich plant material. After the Carboniferous rainforest collapse, both synapsid and sauropsid amniotes quickly gained dominance as the top terrestrial animals during the subsequent Permian period. Some scientists assert that sphenacodontoid synapsids such as Dimetrodon "were the first terrestrial vertebrate to develop the curved, serrated teeth that enable a predator to eat prey much larger than itself".
Mesozoic
In the Mesozoic, some theropod dinosaurs such as Tyrannosaurus rex are thought to have been obligate carnivores.
Though the theropods were the larger carnivores, several carnivorous mammal groups were already present. Most notable are the gobiconodontids, the triconodontid Jugulator, the deltatheroidans and Cimolestes. Many of these, such as Repenomamus, Jugulator and Cimolestes, were among the largest mammals in their faunal assemblages, capable of attacking dinosaurs.
Cenozoic
In the early-to-mid-Cenozoic, the dominant predator forms were mammals: hyaenodonts, oxyaenids, entelodonts, ptolemaiidans, arctocyonids and mesonychians, representing a great diversity of eutherian carnivores in the northern continents and Africa. In South America, sparassodonts were dominant, while Australia saw the presence of several marsupial predators, such as the dasyuromorphs and thylacoleonids. From the Miocene to the present, the dominant carnivorous mammals have been carnivoramorphs.
Most carnivorous mammals, from dogs to deltatheridiums, share several dental adaptations, such as carnassial teeth, long canines and even similar tooth replacement patterns. Most aberrant are thylacoleonids, with a diprotodontan dentition completely unlike that of any other mammal, and eutriconodonts like gobiconodontids and Jugulator, with a three-cusp anatomy which nevertheless functioned similarly to carnassials.
See also
Mesocarnivore
References
Further reading
Biological interactions
Animals by eating behaviors
Ethology | Carnivore | [
"Biology"
] | 2,315 | [
"Behavior",
"Animals by eating behaviors",
"Biological interactions",
"Eating behaviors",
"Behavioural sciences",
"Carnivory",
"nan",
"Ethology"
] |
6,556 | https://en.wikipedia.org/wiki/Coprime%20integers | In number theory, two integers a and b are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Consequently, any prime number that divides a does not divide b, and vice versa. This is equivalent to their greatest common divisor (GCD) being 1. One also says a is prime to b or a is coprime with b.
The numbers 8 and 9 are coprime, despite the fact that neither—considered individually—is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of a reduced fraction are coprime, by definition.
Notation and testing
When the integers a and b are coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formula gcd(a, b) = 1 or (a, b) = 1. In their 1989 textbook Concrete Mathematics, Ronald Graham, Donald Knuth, and Oren Patashnik proposed the notation a ⊥ b to indicate that a and b are relatively prime, and proposed that the term "prime" be used instead of coprime (as in a is prime to b).
A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm and its faster variants such as the binary GCD algorithm or Lehmer's GCD algorithm.
The number of integers coprime with a positive integer n, between 1 and n, is given by Euler's totient function, also known as Euler's phi function, φ(n).
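As a concrete illustration, a few lines of Python show both the coprimality test and the totient count. This is a sketch: math.gcd implements a fast Euclidean-style algorithm internally, and the totient here is computed by brute-force counting rather than by any efficient factorization method.

```python
from math import gcd

def coprime(a: int, b: int) -> bool:
    # Two integers are coprime exactly when their GCD is 1.
    return gcd(a, b) == 1

print(coprime(8, 9))   # True: 1 is their only common divisor
print(coprime(6, 9))   # False: both are divisible by 3

def totient(n: int) -> int:
    """Euler's phi: count of 1 <= k <= n with gcd(k, n) == 1 (naive version)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(totient(9))  # 6: the integers 1, 2, 4, 5, 7, 8 are coprime with 9
```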
A set of integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means that a and b are coprime for every pair (a, b) of different integers in the set. The set {2, 3, 4} is coprime, but it is not pairwise coprime since 2 and 4 are not relatively prime.
Properties
The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0.
A number of conditions are equivalent to a and b being coprime:
No prime number divides both a and b.
There exist integers x and y such that ax + by = 1 (see Bézout's identity).
The integer b has a multiplicative inverse modulo a, meaning that there exists an integer y such that by ≡ 1 (mod a). In ring-theoretic language, b is a unit in the ring of integers modulo a.
Every pair of congruence relations for an unknown integer x, of the form x ≡ k (mod a) and x ≡ m (mod b), has a solution (Chinese remainder theorem); in fact the solutions are described by a single congruence relation modulo ab.
The least common multiple of a and b is equal to their product ab, i.e. lcm(a, b) = ab.
As a consequence of the third point, if a and b are coprime and br ≡ bs (mod a), then r ≡ s (mod a). That is, we may "divide by b" when working modulo a. Furthermore, if b1 and b2 are both coprime with a, then so is their product b1b2 (i.e., modulo a it is a product of invertible elements, and therefore invertible); this also follows from the first point by Euclid's lemma, which states that if a prime number p divides a product bc, then p divides at least one of the factors b, c.
As a consequence of the first point, if a and b are coprime, then so are any powers a^k and b^m.
If a and b are coprime and a divides the product bc, then a divides c. This can be viewed as a generalization of Euclid's lemma.
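The multiplicative-inverse condition above is easy to observe in Python, whose built-in pow accepts a modulus and (since Python 3.8) an exponent of −1 for modular inversion. This is an illustrative sketch, not part of the article's source:

```python
from math import gcd

a, b = 26, 9               # coprime: gcd(26, 9) == 1
inv = pow(b, -1, a)        # multiplicative inverse of b modulo a
print(inv, (b * inv) % a)  # 3 1, since 9 * 3 = 27 ≡ 1 (mod 26)

# When the arguments are not coprime, no inverse exists and pow raises ValueError:
try:
    pow(6, -1, 9)          # gcd(6, 9) == 3, so 6 is not invertible mod 9
except ValueError as e:
    print("no inverse:", e)
```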
The two integers a and b are coprime if and only if the point with coordinates (a, b) in a Cartesian coordinate system would be "visible" via an unobstructed line of sight from the origin (0, 0), in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and (a, b).
In a sense that can be made precise, the probability that two randomly chosen integers are coprime is 6/π², which is about 61% (see below).
Two natural numbers a and b are coprime if and only if the numbers 2^a − 1 and 2^b − 1 are coprime. As a generalization of this, following easily from the Euclidean algorithm in base n > 1: gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1.
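The base-n identity can be spot-checked numerically; the Python sketch below verifies one instance (the choice of n, a, b is arbitrary):

```python
from math import gcd

n, a, b = 10, 12, 18
lhs = gcd(n**a - 1, n**b - 1)
rhs = n**gcd(a, b) - 1   # gcd(12, 18) = 6, so expect 10**6 - 1
print(lhs == rhs, rhs)   # True 999999
```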
Coprimality in sets
A set of integers can also be called coprime or setwise coprime if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them.
If every pair in a set of integers is coprime, then the set is said to be pairwise coprime (or pairwise relatively prime, mutually coprime or mutually relatively prime). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividing all of them is 1), but they are not pairwise coprime (because gcd(4, 6) = 2).
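The distinction is mechanical enough to check in a few lines of Python (a sketch using the standard library only):

```python
from math import gcd
from functools import reduce
from itertools import combinations

def setwise_coprime(nums):
    # GCD of the whole set is 1.
    return reduce(gcd, nums) == 1

def pairwise_coprime(nums):
    # Every pair of distinct elements has GCD 1.
    return all(gcd(x, y) == 1 for x, y in combinations(nums, 2))

print(setwise_coprime([4, 5, 6]), pairwise_coprime([4, 5, 6]))      # True False
print(setwise_coprime([6, 10, 15]), pairwise_coprime([6, 10, 15]))  # True False
```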
The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as the Chinese remainder theorem.
It is possible for an infinite set of integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements in Sylvester's sequence, and the set of all Fermat numbers.
Coprimality in ring ideals
Two ideals A and B in a commutative ring R are called coprime (or comaximal) if A + B = R. This generalizes Bézout's identity: with this definition, two principal ideals (a) and (b) in the ring of integers Z are coprime if and only if a and b are coprime. If the ideals A and B of R are coprime, then AB = A ∩ B; furthermore, if C is a third ideal such that A contains BC, then A contains C. The Chinese remainder theorem can be generalized to any commutative ring, using coprime ideals.
Probability of coprimality
Given two randomly chosen integers a and b, it is reasonable to ask how likely it is that a and b are coprime. In this determination, it is convenient to use the characterization that a and b are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic).
Informally, the probability that any number is divisible by a prime (or in fact any integer) p is 1/p; for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible by p is 1/p², and the probability that at least one of them is not is 1 − 1/p². Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primes p and q if and only if it is divisible by pq; the latter event has probability 1/(pq). If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes, ∏_p (1 − 1/p²) = 1/ζ(2) = 6/π² ≈ 0.6079.
Here ζ refers to the Riemann zeta function, the identity relating the product over primes to ζ(2) is an example of an Euler product, and the evaluation of ζ(2) as π²/6 is the Basel problem, solved by Leonhard Euler in 1735.
There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion of natural density. For each positive integer N, let P_N be the probability that two randomly chosen numbers in {1, 2, ..., N} are coprime. Although P_N will never equal 6/π² exactly, with work one can show that in the limit as N → ∞, the probability P_N approaches 6/π².
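This convergence can be observed numerically. The sketch below computes the exact coprime fraction over {1, ..., N}² by brute force (slow, purely illustrative) and compares it with 6/π²:

```python
from math import gcd, pi

def coprime_fraction(N: int) -> float:
    """Fraction of pairs (a, b) with 1 <= a, b <= N that are coprime."""
    hits = sum(gcd(a, b) == 1 for a in range(1, N + 1) for b in range(1, N + 1))
    return hits / N**2

print(coprime_fraction(1000))  # roughly 0.609
print(6 / pi**2)               # 0.6079271018540267, the limiting value
```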
More generally, the probability of k randomly chosen integers being setwise coprime is 1/ζ(k).
Generating all coprime pairs
All pairs of positive coprime numbers (m, n) (with m > n) can be arranged in two disjoint complete ternary trees, one tree starting from (2, 1) (for even–odd and odd–even pairs), and the other tree starting from (3, 1) (for odd–odd pairs). The children of each vertex (m, n) are generated as follows:
Branch 1: (2m − n, m)
Branch 2: (2m + n, m)
Branch 3: (m + 2n, n)
This scheme is exhaustive and non-redundant with no invalid members. This can be proved by remarking that, if (m, n) is a coprime pair with m > n, then
if m > 3n, then (m, n) is a child of (m − 2n, n) along branch 3;
if 2n < m < 3n, then (m, n) is a child of (n, m − 2n) along branch 2;
if n < m < 2n, then (m, n) is a child of (n, 2n − m) along branch 1.
In all cases the resulting pair is a "smaller" coprime pair with m > n. This process of "computing the father" can stop only if either m = 2n or m = 3n. In these cases, coprimality implies that the pair is either (2, 1) or (3, 1).
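The three branches translate directly into a breadth-first generator. This Python sketch enumerates the first few levels of either tree (the depth parameter and function name are our own):

```python
def coprime_pairs(depth: int, root=(2, 1)):
    """Yield coprime pairs (m, n) with m > n from the ternary tree rooted at `root`."""
    frontier = [root]
    for _ in range(depth):
        nxt = []
        for m, n in frontier:
            yield (m, n)
            # Branches 1, 2 and 3 as listed above:
            nxt += [(2*m - n, m), (2*m + n, m), (m + 2*n, n)]
        frontier = nxt

print(sorted(coprime_pairs(3)))               # even-odd and odd-even pairs from (2, 1)
print(sorted(coprime_pairs(3, root=(3, 1))))  # odd-odd pairs from (3, 1)
```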
Another (much simpler) way to generate a tree of positive coprime pairs (m, n) (with m > n) is by means of two generators f: (m, n) → (m + n, n) and g: (m, n) → (m + n, m), starting with the root (2, 1). The resulting binary tree, the Calkin–Wilf tree, is exhaustive and non-redundant, which can be seen as follows. Given a coprime pair, one recursively applies the inverse of f or of g depending on which of them yields a positive coprime pair with m > n. Since only one does, the tree is non-redundant. Since by this procedure one is bound to arrive at the root, the tree is exhaustive.
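The two-generator scheme is even shorter to sketch in the same style:

```python
def calkin_wilf_pairs(depth: int):
    """Yield coprime pairs (m, n), m > n, from the binary tree with root (2, 1)
    and generators f(m, n) = (m + n, n) and g(m, n) = (m + n, m)."""
    frontier = [(2, 1)]
    for _ in range(depth):
        nxt = []
        for m, n in frontier:
            yield (m, n)
            nxt += [(m + n, n), (m + n, m)]
        frontier = nxt

print(sorted(calkin_wilf_pairs(4)))  # each coprime pair appears exactly once
```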
Applications
In machine design, an even, uniform gear wear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1 gear ratio is desired, a gear relatively prime to the two equal-size gears may be inserted between them.
In pre-computer cryptography, some Vernam cipher machines combined several loops of key tape of different lengths. Many rotor machines combine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime.
Generalizations
This concept can be extended to other algebraic structures than the integers; for example, polynomials whose greatest common divisor is 1 are called coprime polynomials.
See also
Euclid's orchard
Superpartient number
Notes
References
Further reading
Number theory | Coprime integers | [
"Mathematics"
] | 2,084 | [
"Discrete mathematics",
"Number theory"
] |
6,557 | https://en.wikipedia.org/wiki/Control%20unit | The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. A CU typically uses a binary decoder to convert coded instructions into timing and control signals that direct the operation of the other units (memory, arithmetic logic unit and input and output devices, etc.).
Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
Multicycle control units
The simplest computers use a multicycle microarchitecture. These were the earliest designs. They are still popular in the very smallest computers, such as the embedded systems that operate machinery.
In a computer, the control unit often steps through the instruction cycle successively. This consists of fetching the instruction, fetching the operands, decoding the instruction, executing the instruction, and then writing the results back to memory. When the next instruction is placed in the control unit, it changes the behavior of the control unit to complete the instruction correctly. So, the bits of the instruction directly control the control unit, which in turn controls the computer.
The control unit may include a binary counter to tell the control unit's logic what step it should do.
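The stepping behaviour can be caricatured in software. The toy interpreter below walks a made-up single-accumulator instruction set through the fetch/decode/execute/write-back steps; everything about the ISA (LOAD/ADD/STORE, the memory layout) is invented for illustration and corresponds to no real machine:

```python
def run(program, memory):
    """A toy multicycle control loop: each instruction passes through
    fetch, decode, execute and write-back in sequence."""
    acc, pc = 0, 0
    while pc < len(program):
        op, operand = program[pc]  # steps 1-2: fetch and decode
        if op == "LOAD":           # step 3: execute - read memory into accumulator
            acc = memory[operand]
        elif op == "ADD":          # step 3: execute - add a memory operand
            acc += memory[operand]
        elif op == "STORE":        # step 4: write the result back to memory
            memory[operand] = acc
        pc += 1                    # the step counter moves on to the next instruction
    return memory

mem = {0: 2, 1: 3, 2: 0}
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem))  # {0: 2, 1: 3, 2: 5}
```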
Multicycle control units typically use both the rising and falling edges of their square-wave timing clock. They perform one step of their operation on each edge of the timing clock, so that a four-step operation completes in two clock cycles. This doubles the speed of the computer, given the same logic family.
Many computers have two different types of unexpected events. An interrupt occurs because some type of input or output needs software attention in order to operate correctly. An exception is caused by the computer's operation. One crucial difference is that the timing of an interrupt cannot be predicted. Another is that some exceptions (e.g. a memory-not-available exception) can be caused by an instruction that needs to be restarted.
Control units can be designed to handle interrupts in one of two typical ways. If a quick response is most important, a control unit is designed to abandon work to handle the interrupt. In this case, the work in process will be restarted after the last completed instruction. If the computer is to be very inexpensive, very simple, very reliable, or to get more work done, the control unit will finish the work in process before handling the interrupt. Finishing the work is inexpensive, because it needs no register to record the last finished instruction. It is simple and reliable because it has the fewest states. It also wastes the least amount of work.
Exceptions can be made to operate like interrupts in very simple computers. If virtual memory is required, then a memory-not-available exception must retry the failing instruction.
It is common for multicycle computers to use more cycles. Sometimes it takes longer to take a conditional jump, because the program counter has to be reloaded. Sometimes they perform multiplication or division instructions by a stepwise process resembling binary long multiplication and division. Very small computers might do arithmetic one or a few bits at a time. Some other computers have very complex instructions that take many steps.
Pipelined control units
Many medium-complexity computers pipeline instructions. This design is popular because of its economy and speed.
In a pipelined computer, instructions flow through the computer. This design has several stages. For example, it might have one stage for each step of the Von Neumann cycle. A pipelined computer usually has "pipeline registers" after each stage. These store the bits calculated by a stage so that the logic gates of the next stage can use the bits to do the next step.
It is common for even numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This speeds the computer by a factor of two compared to single-edge designs.
In a pipelined computer, the control unit arranges for the flow to start, continue, and stop as a program commands. The instruction data is usually passed in pipeline registers from one stage to the next, with a somewhat separated piece of control logic for each stage. The control unit also assures that the instruction in each stage does not harm the operation of instructions in other stages. For example, if two stages must use the same piece of data, the control logic assures that the uses are done in the correct sequence.
When operating efficiently, a pipelined computer will have an instruction in each stage. It is then working on all of those instructions at the same time. It can finish about one instruction for each cycle of its clock. When a program makes a decision, and switches to a different sequence of instructions, the pipeline sometimes must discard the data in process and restart. This is called a "stall." When two instructions could interfere, sometimes the control unit must stop processing a later instruction until an earlier instruction completes. This is called a "pipeline bubble" because a part of the pipeline is not processing instructions. Pipeline bubbles can occur when two instructions operate on the same register.
Interrupts and unexpected exceptions also stall the pipeline. If a pipelined computer abandons work for an interrupt, more work is lost than in a multicycle computer. Predictable exceptions do not need to stall. For example, if an exception instruction is used to enter the operating system, it does not cause a stall.
For the same speed of electronic logic, a pipelined computer can execute more instructions per second than a multicycle computer. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates.
A pipelined model of a computer often has fewer logic gates per instruction per second than multicycle and out-of-order computers. This is because the average stage is less complex than a multicycle computer. An out-of-order computer usually has large amounts of idle logic at any given instant. Similar calculations usually show that a pipelined computer uses less energy per instruction.
However, a pipelined computer is usually more complex and more costly than a comparable multicycle computer. It typically has more logic gates, registers and a more complex control unit. In a like way, it might use more total energy, while using less energy per instruction. Out-of-order CPUs can usually do more instructions per second because they can do several instructions at once.
Preventing stalls
Control units use many methods to keep a pipeline full and avoid stalls. For example, even simple control units can assume that a backwards branch, to a lower-numbered, earlier instruction, is a loop, and will be repeated. So, a control unit with this design will always fill the pipeline with the backwards branch path. If a compiler can detect the most frequently-taken direction of a branch, the compiler can just produce instructions so that the most frequently taken branch is the preferred direction of branch. In a like way, a control unit might get hints from the compiler: Some computers have instructions that can encode hints from the compiler about the direction of branch.
Some control units do branch prediction: A control unit keeps an electronic list of the recent branches, encoded by the address of the branch instruction. This list has a few bits for each branch to remember the direction that was taken most recently.
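A common concrete form of this list is a table of 2-bit saturating counters indexed by branch address. The sketch below models one; the class name and the choice to seed counters at "weakly not taken" are illustrative assumptions, not a description of any particular CPU:

```python
class BranchPredictor:
    """Branch history table of 2-bit saturating counters:
    states 0-1 predict 'not taken', states 2-3 predict 'taken'."""
    def __init__(self):
        self.table = {}                      # branch address -> 2-bit counter

    def predict(self, addr: int) -> bool:
        return self.table.get(addr, 1) >= 2  # unseen branches start weakly not taken

    def update(self, addr: int, taken: bool):
        c = self.table.get(addr, 1)
        self.table[addr] = min(c + 1, 3) if taken else max(c - 1, 0)

bp = BranchPredictor()
for outcome in [True, True, True, False, True]:  # a typical loop branch
    print("predicted:", bp.predict(0x40), "actual:", outcome)
    bp.update(0x40, outcome)
```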
Some control units can do speculative execution, in which a computer might have two or more pipelines, calculate both directions of a branch, and then discard the calculations of the unused direction.
Results from memory can become available at unpredictable times because very fast computers cache memory. That is, they copy limited amounts of memory data into very fast memory. The CPU must be designed to process at the very fast speed of the cache memory. Therefore, the CPU might stall when it must access main memory directly. In modern PCs, main memory is as much as three hundred times slower than cache.
To address this, out-of-order CPUs and control units were developed to process data as it becomes available (see the next section).
But what if all the calculations are complete, but the CPU is still stalled, waiting for main memory? Then, a control unit can switch to an alternative thread of execution whose data has been fetched while the thread was idle. A thread has its own program counter, a stream of instructions and a separate set of registers. Designers vary the number of threads depending on current memory technologies and the type of computer. Typical computers such as PCs and smart phones usually have control units with a few threads, just enough to keep busy with affordable memory systems. Database computers often have about twice as many threads, to keep their much larger memories busy. Graphic processing units (GPUs) usually have hundreds or thousands of threads, because they have hundreds or thousands of execution units doing repetitive graphic calculations.
When a control unit permits threads, the software also has to be designed to handle them. In general-purpose CPUs like PCs and smartphones, the threads are usually made to look very like normal time-sliced processes. At most, the operating system might need some awareness of them. In GPUs, the thread scheduling usually cannot be hidden from the application software, and is often controlled with a specialized subroutine library.
Out of order control units
A control unit can be designed to finish what it can. If several instructions can be completed at the same time, the control unit will arrange it. So, the fastest computers can process instructions in a sequence that can vary somewhat, depending on when the operands or instruction destinations become available. Most supercomputers and many PC CPUs use this method. The exact organization of this type of control unit depends on the slowest part of the computer.
When the execution of calculations is the slowest, instructions flow from memory into pieces of electronics called "issue units." An issue unit holds an instruction until both its operands and an execution unit are available. Then, the instruction and its operands are "issued" to an execution unit. The execution unit does the instruction. Then the resulting data is moved into a queue of data to be written back to memory or registers. If the computer has multiple execution units, it can usually do several instructions per clock cycle.
It is common to have specialized execution units. For example, a modestly priced computer might have only one floating-point execution unit, because floating point units are expensive. The same computer might have several integer units, because these are relatively inexpensive, and can do the bulk of instructions.
One kind of control unit for issuing uses an array of electronic logic, a "scoreboard" that detects when an instruction can be issued. The "height" of the array is the number of execution units, and the "length" and "width" are each the number of sources of operands. When all the items come together, the signals from the operands and execution unit will cross. The logic at this intersection detects that the instruction can work, so the instruction is "issued" to the free execution unit. An alternative style of issuing control unit implements the Tomasulo algorithm, which reorders a hardware queue of instructions. In some sense, both styles utilize a queue. The scoreboard is an alternative way to encode and reorder a queue of instructions, and some designers call it a queue table.
With some additional logic, a scoreboard can compactly combine execution reordering, register renaming and precise exceptions and interrupts. Further it can do this without the power-hungry, complex content-addressable memory used by the Tomasulo algorithm.
If the execution is slower than writing the results, the memory write-back queue always has free entries. But what if the memory writes slowly? Or what if the destination register will be used by an "earlier" instruction that has not yet issued? Then the write-back step of the instruction might need to be scheduled. This is sometimes called "retiring" an instruction. In this case, there must be scheduling logic on the back end of execution units. It schedules access to the registers or memory that will get the results.
Retiring logic can also be designed into an issuing scoreboard or a Tomasulo queue, by including memory or register access in the issuing logic.
Out of order controllers require special design features to handle interrupts. When there are several instructions in progress, it is not clear where in the instruction stream an interrupt occurs. For input and output interrupts, almost any solution works. However, when a computer has virtual memory, an interrupt occurs to indicate that a memory access failed. This memory access must be associated with an exact instruction and an exact processor state, so that the processor's state can be saved and restored by the interrupt. A usual solution preserves copies of registers until a memory access completes.
Also, out of order CPUs have even more problems with stalls from branching, because they can complete several instructions per clock cycle, and usually have many instructions in various stages of progress. So, these control units might use all of the solutions used by pipelined processors.
Translating control units
Some computers translate each single instruction into a sequence of simpler instructions. The advantage is that an out of order computer can be simpler in the bulk of its logic, while handling complex multi-step instructions. x86 Intel CPUs since the Pentium Pro translate complex CISC x86 instructions to more RISC-like internal micro-operations.
In these, the "front" of the control unit manages the translation of instructions. Operands are not translated. The "back" of the CU is an out-of-order CPU that issues the micro-operations and operands to the execution units and data paths.
Control units for low-powered computers
Many modern computers have controls that minimize power usage. In battery-powered computers, such as those in cell-phones, the advantage is longer battery life. In computers with utility power, the justification is to reduce the cost of power, cooling or noise.
Most modern computers use CMOS logic. CMOS wastes power in two common ways: by changing state, i.e. "active power", and by unintended leakage. The active power of a computer can be reduced by turning off control signals. Leakage current can be reduced by reducing the electrical pressure (the voltage), by making transistors with larger depletion regions, or by turning off the logic completely.
Active power is easier to reduce because data stored in the logic is not affected. The usual method reduces the CPU's clock rate. Most computer systems use this method. It is common for a CPU to idle during the transition to avoid side-effects from the changing clock.
Most computers also have a "halt" instruction. This was invented to stop non-interrupt code so that interrupt code has reliable timing. However, designers soon noticed that a halt instruction was also a good time to turn off a CPU's clock completely, reducing the CPU's active power to zero. The interrupt controller might continue to need a clock, but that usually uses much less power than the CPU.
These methods are relatively easy to design, and became so common that others were invented for commercial advantage. Many modern low-power CMOS CPUs stop and start specialized execution units and bus interfaces depending on the needed instruction. Some computers even arrange the CPU's microarchitecture to use transfer-triggered multiplexers so that each instruction only utilises the exact pieces of logic needed.
One common method is to spread the load to many CPUs, and turn off unused CPUs as the load reduces. The operating system's task switching logic saves the CPUs' data to memory. In some cases, one of the CPUs can be simpler and smaller, literally with fewer logic gates. So, it has low leakage, and it is the last to be turned off, and the first to be turned on. Also it then is the only CPU that requires special low-power features. A similar method is used in most PCs, which usually have an auxiliary embedded CPU that manages the power system. However, in PCs, the software is usually in the BIOS, not the operating system.
Theoretically, computers at lower clock speeds could also reduce leakage by reducing the voltage of the power supply. This affects the reliability of the computer in many ways, so the engineering is expensive, and it is uncommon except in relatively expensive computers such as PCs or cellphones.
Some designs can use very low leakage transistors, but these usually add cost. The depletion barriers of the transistors can be made larger to have less leakage, but this makes the transistor larger and thus both slower and more expensive. Some vendors use this technique in selected portions of an IC by constructing low-leakage logic from large transistors that some processes provide for analog circuits. Some processes place the transistors above the surface of the silicon, in "FinFETs", but these processes have more steps, so are more expensive. Special transistor doping materials (e.g. hafnium) can also reduce leakage, but this adds steps to the processing, making it more expensive. Some semiconductors have a larger band-gap than silicon. However, these materials and processes are currently (2020) more expensive than silicon.
Managing leakage is more difficult, because before the logic can be turned-off, the data in it must be moved to some type of low-leakage storage.
Some CPUs make use of a special type of flip-flop (to store a bit) that couples a fast, high-leakage storage cell to a slow, large (expensive) low-leakage cell. These two cells have separated power supplies. When the CPU enters a power saving mode (e.g. because of a halt that waits for an interrupt), data is transferred to the low-leakage cells, and the others are turned off. When the CPU leaves a low-leakage mode (e.g. because of an interrupt), the process is reversed.
Older designs would copy the CPU state to memory, or even disk, sometimes with specialized software. Very simple embedded systems sometimes just restart.
Integrating with the Computer
All modern CPUs have control logic to attach the CPU to the rest of the computer. In modern computers, this is usually a bus controller. When an instruction reads or writes memory, the control unit either controls the bus directly, or controls a bus controller. Many modern computers use the same bus interface for memory, input and output. This is called "memory-mapped I/O". To a programmer, the registers of the I/O devices appear as numbers at specific memory addresses. x86 PCs use an older method, a separate I/O bus accessed by I/O instructions.
A modern CPU also tends to include an interrupt controller. It handles interrupt signals from the system bus. The control unit is the part of the computer that responds to the interrupts.
There is often a cache controller to cache memory. The cache controller and the associated cache memory is often the largest physical part of a modern, higher-performance CPU. When the memory, bus or cache is shared with other CPUs, the control logic must communicate with them to assure that no computer ever gets out-of-date old data.
Many historic computers built some type of input and output directly into the control unit. For example, many historic computers had a front panel with switches and lights directly controlled by the control unit. These let a programmer directly enter a program and debug it. In later production computers, the most common use of a front panel was to enter a small bootstrap program to read the operating system from disk. This was annoying. So, front panels were replaced by bootstrap programs in read-only memory.
Most PDP-8 models had a data bus designed to let I/O devices borrow the control unit's memory read and write logic. This reduced the complexity and expense of high speed I/O controllers, e.g. for disk.
The Xerox Alto had a multitasking microprogrammable control unit that performed almost all I/O. This design provided most of the features of a modern PC with only a tiny fraction of the electronic logic. The dual-thread computer was run by the two lowest-priority microthreads. These performed calculations whenever I/O was not required. High priority microthreads provided (in decreasing priority) video, network, disk, a periodic timer, mouse, and keyboard. The microprogram did the complex logic of the I/O device, as well as the logic to integrate the device with the computer. For the actual hardware I/O, the microprogram read and wrote shift registers for most I/O, sometimes with resistor networks and transistors to shift output voltage levels (e.g. for video). To handle outside events, the microcontroller had microinterrupts to switch threads at the end of a thread's cycle, e.g. at the end of an instruction, or after a shift-register was accessed. The microprogram could be rewritten and reinstalled, which was very useful for a research computer.
Functions of the control unit
A program of instructions in memory causes the CU to configure the CPU's data flows to manipulate the data correctly between instructions. This results in a computer that can run a complete program with no human intervention to make hardware changes between instructions (as had to be done when using only punched cards for computations, before stored-program computers with CUs were invented).
Hardwired control unit
Hardwired control units are implemented through the use of combinational logic units, featuring a finite number of gates that can generate specific results based on the instructions that were used to invoke those responses. Hardwired control units are generally faster than microprogrammed designs.
This design uses a fixed architecture—it requires changes in the wiring if the instruction set is modified or changed. It can be convenient for simple, fast computers.
A controller that uses this approach can operate at high speed; however, it has little flexibility. A complex instruction set can overwhelm a designer who uses ad hoc logic design.
The hardwired approach has become less popular as computers have evolved. Previously, control units for CPUs used ad hoc logic, and they were difficult to design.
Microprogram control unit
The idea of microprogramming was introduced by Maurice Wilkes in 1951 as an intermediate level to execute computer program instructions. Microprograms were organized as a sequence of microinstructions and stored in special control memory. The algorithm for the microprogram control unit, unlike the hardwired control unit, is usually specified by flowchart description. The main advantage of a microprogrammed control unit is the simplicity of its structure. The controller's outputs are organized in microinstructions, and the microprogram can be debugged and replaced similarly to software.
Combination methods of design
A popular variation on microcode is to debug the microcode using a software simulator. Then, the microcode is a table of bits. This is a logical truth table, that translates a microcode address into the control unit outputs. This truth table can be fed to a computer program that produces optimized electronic logic. The resulting control unit is almost as easy to design as microprogramming, but it has the fast speed and low number of logic elements of a hard wired control unit. The practical result resembles a Mealy machine or Richards controller.
See also
Processor design
Computer architecture
Richards controller
Controller (computing)
References
Central processing unit
Digital electronics | Control unit | [
"Engineering"
] | 4,881 | [
"Electronic engineering",
"Digital electronics"
] |
6,562 | https://en.wikipedia.org/wiki/Conditional%20proof | A conditional proof is a proof that takes the form of asserting a conditional, and proving that the antecedent of the conditional necessarily leads to the consequent.
Overview
The assumed antecedent of a conditional proof is called the conditional proof assumption (CPA). Thus, the goal of a conditional proof is to demonstrate that if the CPA were true, then the desired conclusion necessarily follows. The validity of a conditional proof does not require that the CPA be true, only that if it were true it would lead to the consequent.
Conditional proofs are of great importance in mathematics. Conditional proofs exist linking several otherwise unproven conjectures, so that a proof of one conjecture may immediately imply the validity of several others. It can be much easier to show a proposition's truth to follow from another proposition than to prove it independently.
A famous network of conditional proofs is the NP-complete class of complexity theory. There is a large number of interesting tasks (see List of NP-complete problems), and while it is not known if a polynomial-time solution exists for any of them, it is known that if such a solution exists for some of them, one exists for all of them. Similarly, the Riemann hypothesis has many consequences already proven.
Symbolic logic
As an example of a conditional proof in symbolic logic, suppose we want to prove A → C (if A, then C) from the premises A → B and B → C. Assume A as the conditional proof assumption. From A and the first premise, B follows by modus ponens; from B and the second premise, C follows. Since the assumption of A leads to C, we may discharge the assumption and conclude A → C.
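The validity of this derivation can be double-checked mechanically by enumerating truth values. The short Python sketch below confirms that A → C holds in every valuation in which both premises hold; the encoding of → as "not p or q" is the standard material conditional:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q  # material conditional

for A, B, C in product([False, True], repeat=3):
    if implies(A, B) and implies(B, C):  # both premises true in this valuation
        assert implies(A, C)             # then the conclusion is true as well
print("A -> C follows from A -> B and B -> C in all 8 valuations")
```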
See also
Deduction theorem
Logical consequence
Propositional calculus
References
Robert L. Causey, Logic, sets, and recursion, Jones and Barlett, 2006.
Dov M. Gabbay, Franz Guenthner (eds.), Handbook of philosophical logic, Volume 8, Springer, 2002.
Logic
Conditionals
Mathematical proofs
Methods of proof | Conditional proof | [
"Mathematics"
] | 367 | [
"Methods of proof",
"nan",
"Proof theory"
] |
6,563 | https://en.wikipedia.org/wiki/Conjunction%20introduction | Conjunction introduction (often abbreviated simply as conjunction and also called and introduction or adjunction) is a valid rule of inference of propositional logic. The rule makes it possible to introduce a conjunction into a logical proof. It is the inference that if the proposition is true, and the proposition is true, then the logical conjunction of the two propositions and is true. For example, if it is true that "it is raining", and it is true that "the cat is inside", then it is true that "it is raining and the cat is inside". The rule can be stated:
where the rule is that wherever an instance of "" and "" appear on lines of a proof, a "" can be placed on a subsequent line.
Formal notation
The conjunction introduction rule may be written in sequent notation:
P, Q ⊢ P ∧ Q
where P and Q are propositions expressed in some formal system, and ⊢ is a metalogical symbol meaning that P ∧ Q is a syntactic consequence if P and Q are each on lines of a proof in some logical system.
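In a proof assistant the rule appears as a primitive constructor. A minimal Lean 4 sketch:

```lean
-- Conjunction introduction: from a proof of P and a proof of Q,
-- `And.intro` builds a proof of P ∧ Q.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q :=
  And.intro hp hq
```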
References
Rules of inference
Theorems in propositional logic | Conjunction introduction | [
"Mathematics"
] | 217 | [
"Rules of inference",
"Theorems in propositional logic",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
6,596 | https://en.wikipedia.org/wiki/Computer%20vision | Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images (the input to the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. Image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, 3D point clouds from LiDAR sensors, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems.
Subdisciplines of computer vision include scene reconstruction, object detection, event detection, activity recognition, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration.
Definition
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. "Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. Machine vision refers to a systems engineering discipline, especially in the context of factory automation. In more recent times, the terms computer vision and machine vision have converged to a greater degree.
History
In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through an undergraduate summer project, by attaching a camera to a computer and having it "describe what it saw".
What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.
The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields.
By the 1990s, some of the previous research topics became more active than others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering.
Recent work has seen the resurgence of feature-based methods used in conjunction with machine learning techniques and complex optimization frameworks.
The advancement of Deep Learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification, segmentation and optical flow has surpassed prior methods.
Related fields
Solid-state physics
Solid-state physics is another field that is closely related to computer vision. Most computer vision systems rely on image sensors, which detect electromagnetic radiation, which is typically in the form of either visible, infrared or ultraviolet light. The sensors are designed using quantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior of optics which are a core part of most imaging systems. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example, motion in fluids.
Neurobiology
Neurobiology has greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse yet convoluted description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision (e.g. neural net and deep learning based image and feature analysis and classification) have their background in neurobiology. The Neocognitron, a neural network developed in the 1970s by Kunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically the primary visual cortex.
Some strands of computer vision research are closely related to the study of biological vision—indeed, just as many strands of AI research are closely tied with research into human intelligence and the use of stored knowledge to interpret, integrate, and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields.
Signal processing
Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision.
Robotic navigation
Robot navigation sometimes deals with autonomous path planning or deliberation for robotic systems to navigate through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot.
Visual computing
Visual computing is a generic term for the computer science disciplines that deal with images and 3D models, such as computer graphics, image processing, visualization, and computer vision.
Other fields
Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision; how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry.
Distinctions
The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, which can be interpreted to mean that there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. In image processing, the input is an image and the output is an image as well, whereas in computer vision, an image or a video is taken as input and the output could be an enhanced image, an understanding of the content of an image, or even the behavior of a computer system based on such understanding.
Computer graphics produces image data from 3D models, and computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality.
The following characterizations appear relevant but should not be taken as universally accepted:
Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image (see the sketch after this list). This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content.
Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image.
Machine vision is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control, and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms.
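A minimal NumPy sketch of the three kinds of image-processing operation named in the first characterization above; the random array merely stands in for a real grayscale image:

```python
import numpy as np

# A stand-in for a real grayscale image.
img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)

# Pixel-wise operation: linear contrast stretch to the full 0..255 range.
stretched = (img - img.min()) * 255.0 / (img.max() - img.min())

# Local operation: a crude edge map from horizontal neighbour differences
# (a heavily simplified gradient filter).
edges = np.abs(np.diff(stretched, axis=1))

# Geometrical transformation: rotate the image by 90 degrees.
rotated = np.rot90(stretched)

print(stretched.shape, edges.shape, rotated.shape)
```

None of these operations interprets the image content, which is exactly the distinction drawn above.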
There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications. Progress in convolutional neural networks (CNNs) has improved the accurate detection of disease in medical images, particularly in cardiology, pathology, dermatology, and radiology.
Finally, pattern recognition is a field that uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data.
Photogrammetry also overlaps with computer vision, e.g., stereophotogrammetry vs. computer stereo vision.
Applications
Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for:
Automatic inspection, e.g., in manufacturing applications;
Assisting humans in identification tasks, e.g., a species identification system;
Controlling processes, e.g., an industrial robot;
Detecting events, e.g., for visual surveillance or people counting, e.g., in the restaurant industry;
Interaction, e.g., as the input to a device for computer-human interaction;
Monitoring agricultural crops, e.g., an open-source vision transformer model has been developed to help farmers automatically detect strawberry diseases with 98.4% accuracy;
Modeling objects or environments, e.g., medical image analysis or topographical modeling;
Navigation, e.g., by an autonomous vehicle or mobile robot;
Organizing information, e.g., for indexing databases of images and image sequences.
Tracking surfaces or planes in 3D coordinates for allowing Augmented Reality experiences.
Medicine
One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. An example of this is the detection of tumours, arteriosclerosis or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information: e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans—ultrasonic images or X-ray images, for example—to reduce the influence of noise.
Machine vision
A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a production process. One example is quality control where details or final products are being automatically inspected in order to find defects. One of the most prevalent fields for such inspection is the Wafer industry in which every single Wafer is being measured and inspected for inaccuracies or defects to prevent a computer chip from coming to market in an unusable manner. Another example is a measurement of the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in the agricultural processes to remove undesirable foodstuff from bulk material, a process called optical sorting.
Military
Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.
Autonomous vehicles
One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAV). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or mapping their environment (SLAM), for detecting obstacles. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already being made with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover.
Tactile feedback
Materials such as rubber and silicone are being used to create sensors that allow for applications such as detecting micro-undulations and calibrating robotic hands. Rubber can be used to create a mold that can be placed over a finger; inside this mold would be multiple strain gauges. The finger mold and sensors could then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure if one or more of the pins are being pushed upward. If a pin is being pushed upward, the computer can recognize this as an imperfection in the surface. This sort of technology is useful for receiving accurate data on imperfections on a very large surface. Another variation of this finger-mold sensor is a sensor that contains a camera suspended in silicone. The silicone forms a dome around the outside of the camera, and embedded in the silicone are equally spaced point markers. These cameras can then be placed on devices such as robotic hands to allow the computer to receive highly accurate tactile data.
Other application areas include:
Support of visual effects creation for cinema and broadcast, e.g., camera tracking (match moving).
Surveillance.
Driver drowsiness detection
Tracking and counting organisms in the biological sciences
Typical tasks
Each of the application areas described above employ a range of computer vision tasks; more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below.
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
Recognition
The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of recognition problem are described in the literature.
Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles, and LikeThat provide stand-alone programs that illustrate this functionality.
Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle.
Detection – the image data are scanned for specific objects along with their locations. Examples include the detection of an obstacle in the car's field of view, possible abnormal cells or tissues in medical images, or a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data, which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.
Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition. Performance of convolutional neural networks on the ImageNet tests is now close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease.
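As an illustration of the convolutional-network approach to object classification, here is a minimal sketch using a pretrained torchvision model; the image path is a placeholder, and the weights argument assumes torchvision 0.13 or later:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# A small pretrained convolutional network (1000 ImageNet classes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, crop, scale, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # placeholder path
batch = preprocess(img).unsqueeze(0)            # add a batch dimension
with torch.no_grad():
    scores = model(batch)
print(int(scores.argmax()))  # index of the predicted ImageNet class
```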
Several specialized tasks based on recognition exist, such as:
Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X) by utilizing reverse image search techniques, or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them).
Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly-line situation or picking parts from a bin.
Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII); see the sketch after this list. A related task is reading of 2D codes such as data matrix and QR codes.
Facial recognition – a technology that enables the matching of faces in digital images or video frames to a face database, now widely used for mobile phone face unlocking, smart door locks, etc.
Emotion recognition – a subset of facial recognition that refers to the process of classifying human emotions. Psychologists caution, however, that internal emotions cannot be reliably detected from faces.
Shape Recognition Technology (SRT) – used in people counter systems to differentiate human beings (head and shoulder patterns) from objects.
Human activity recognition – recognizing an activity from a series of video frames, such as whether a person is picking up an object or walking.
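A minimal OCR sketch using the pytesseract wrapper around the Tesseract engine; the file name is a placeholder, and Tesseract itself must be installed separately:

```python
from PIL import Image
import pytesseract  # Python wrapper around the Tesseract OCR engine

# Read an image of printed text and return its contents as an editable string.
img = Image.open("scanned_page.png")  # placeholder path
text = pytesseract.image_to_string(img)
print(text)
```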
Motion analysis
Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are:
Egomotion – determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera.
Tracking – following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles, objects, humans or other organisms) in the image sequence. This has vast industry applications, as most high-running machinery can be monitored in this way.
Optical flow – determining, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion (see the sketch after this list). This motion is a result both of how the corresponding 3D point is moving in the scene and of how the camera is moving relative to the scene.
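A minimal dense optical-flow sketch using OpenCV's Farnebäck method; the two frames are synthetic stand-ins for consecutive video frames, so the recovered motion should be roughly one pixel to the right:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames: a random texture and a copy
# shifted one pixel to the right.
prev = np.random.default_rng(0).integers(0, 256, (120, 160), np.uint8)
curr = np.roll(prev, 1, axis=1)

# Dense optical flow (Farnebäck): one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print(flow.shape)           # (120, 160, 2): x and y displacement per pixel
print(flow[..., 0].mean())  # mean horizontal motion, roughly +1
```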
Scene reconstruction
Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging not requiring motion or scanning, and related processing algorithms is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models.
Image restoration
Image restoration comes into the picture when the original image is degraded or damaged due to external factors such as incorrect lens positioning, transmission interference, low lighting, or motion blur, collectively referred to as noise. When an image is degraded or damaged, the information to be extracted from it is also damaged. Therefore, we need to recover or restore the image as it was intended to be. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach to noise removal is various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look in order to distinguish them from noise. By first analyzing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.
An example in this field is inpainting.
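A minimal restoration sketch: salt-and-pepper noise removed with a median filter, one of the simple filter-based approaches described above. The image and noise level are synthesized for illustration:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

# A clean horizontal gradient image, corrupted with salt-and-pepper noise.
clean = np.tile(np.arange(0, 256, 2, dtype=np.uint8), (128, 1))
noisy = clean.copy()
mask = rng.random(noisy.shape) < 0.05  # corrupt 5% of the pixels
noisy[mask] = rng.choice(np.array([0, 255], dtype=np.uint8),
                         size=int(mask.sum()))

# A 5x5 median filter discards isolated outlier pixels.
restored = cv2.medianBlur(noisy, 5)
print(np.abs(restored.astype(int) - clean.astype(int)).mean())
```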
System methods
The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or if some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions that are found in many computer vision systems; a minimal end-to-end sketch of such a pipeline follows the list below.
Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images) but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or magnetic resonance imaging.
Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to ensure that it satisfies certain assumptions implied by the method. Examples are:
Re-sampling to ensure that the image coordinate system is correct.
Noise reduction to ensure that sensor noise does not introduce false information.
Contrast enhancement to ensure that relevant information can be detected.
Scale space representation to enhance image structures at locally appropriate scales.
Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are:
Lines, edges and ridges.
Localized interest points such as corners, blobs or points.
More complex features may be related to texture, shape, or motion.
Detection/segmentation – At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are:
Selection of a specific set of interest points.
Segmentation of one or multiple image regions that contain a specific object of interest.
Segmentation of image into nested scene architecture comprising foreground, object groups, single objects or salient object parts (also referred to as spatial-taxon scene hierarchy), while the visual salience is often implemented as spatial and temporal attention.
Segmentation or co-segmentation of one or multiple videos into a series of per-frame foreground masks while maintaining its temporal semantic continuity.
High-level processing – At this step, the input is typically a small set of data, for example, a set of points or an image region, which is assumed to contain a specific object. The remaining processing deals with, for example:
Verification that the data satisfies model-based and application-specific assumptions.
Estimation of application-specific parameters, such as object pose or object size.
Image recognition – classifying a detected object into different categories.
Image registration – comparing and combining two different views of the same object.
Decision making – making the final decision required for the application, for example:
Pass/fail on automatic inspection applications.
Match/no-match in recognition applications.
Flag for further human review in medical, military, security and recognition applications.
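A minimal end-to-end sketch of the stages above, from acquisition to decision; the image is synthetic and every threshold is an arbitrary illustration value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Image acquisition: a synthetic noisy grayscale image with one bright square.
img = rng.normal(30.0, 5.0, (100, 100))
img[40:60, 40:60] += 120.0

# Pre-processing: noise reduction with a crude 3x3 box blur.
blurred = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

# Detection/segmentation: threshold to isolate the bright region.
mask = blurred > 90.0

# High-level processing: estimate application-specific parameters,
# here the detected object's area and centroid.
ys, xs = np.nonzero(mask)
area = int(mask.sum())
centroid = (ys.mean(), xs.mean())

# Decision making: pass/fail, as in automatic inspection.
print("PASS" if 300 < area < 600 else "FAIL", area, centroid)
```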
Image-understanding systems
Image-understanding systems (IUS) include three levels of abstraction as follows: low level includes image primitives such as edges, texture elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of these requirements remain entirely topics for further research.
The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation.
While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, inference and goal satisfaction.
Hardware
There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, CCD sensor, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for inner spaces, as most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors.
Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower).
A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, radar imaging, lidar scanners, magnetic resonance images, side-scan sonar, synthetic aperture sonar, etc. Such hardware captures "images" that are then processed often using the same computer vision algorithms used to process visible-light images.
While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized.
Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective.
As of 2016, vision processing units are emerging as a new class of processors to complement CPUs and graphics processing units (GPUs) in this role.
See also
Chessboard detection
Computational imaging
Computational photography
Computer audition
Egocentric vision
Machine vision glossary
Space mapping
Teknomo–Fernandez algorithm
Vision science
Visual agnosia
Visual perception
Visual system
Lists
Outline of computer vision
List of emerging technologies
Outline of artificial intelligence
References
Further reading
External links
USC Iris computer vision conference list
Computer vision papers on the web – a complete list of papers of the most relevant computer vision conferences.
Computer Vision Online – news, source code, datasets and job offers related to computer vision
CVonline – Bob Fisher's Compendium of Computer Vision.
British Machine Vision Association – supporting computer vision research within the UK via the BMVC and MIUA conferences, Annals of the BMVA (open-source journal), BMVA Summer School and one-day meetings
Computer Vision Container, Joe Hoeller GitHub: Widely adopted open-source container for GPU accelerated computer vision applications. Used by researchers, universities, private companies, as well as the U.S. Gov't.
Image processing
Packaging machinery
Articles containing video clips | Computer vision | [
"Engineering"
] | 6,599 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Industrial machinery",
"Computer vision"
] |